This is one of the photographs that hangs in my office and it’s a quote from Buckminster Fuller which says:
“You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.”

It’s a line of thinking I adhere to frequently. So it’ll be no surprise to those of you who have followed this blog that one of my pet projects is not to try to perfect existing discovery solutions, but rather to build a Knowledge Creation Platform. For a starting description of that concept, see this article.
Now combine that thinking with something I heard back in 2011, at a Charleston Conference session titled “New Initiatives in Open Research” where both Cliff Lynch and the late Lee Dirks spoke. Cliff said:
“If you do the math, you will find horrifying numbers, something like a scientific paper is published every minute or two. It means you’re buried. It means everybody’s buried.”

That fact really stuck with me.
Today, what I see and experience in the university environment is a pace of knowledge creation so intense and so fast that our current tools for researching, building and expressing new knowledge are outdated. And thus the existing processes we’ve automated are equally outdated. This strikes me as a set of problems in need of a very large fix.
So I want to introduce what I think is a very exciting step forward in beginning to address those issues: we’re starting a new initiative with the high-tech firm Exaptive. Here is a video describing their product, with an introduction to the initiative. I encourage you to watch it before reading any further.
Ok, you’ve watched it and now you’re back? Great.
Let me fill you in on the thinking behind this announcement. Consistent with what I’ve said in articles and related blog posts, I want to provide our library users/members with substantially different capabilities than those they get from a generic search tool like Google Scholar. As noted in my writings, most discovery/search tools query repositories or databases that contain existing knowledge metadata and content.
Certainly that’s very valuable. But those existing tools make no real provision for analyzing this existing knowledge or drawing correlations between data sources, nor do they suggest overlaps, visualize the results or make it easy to bring together the people behind the knowledge found. The existing tools let you find existing knowledge; they don’t give you the capability to start building new knowledge. The Exaptive product, however, moves us down the path toward knowledge creation and more. The implications of this are far reaching.
For example, the first project we’re moving forward on involves a researcher looking for a concept that, over the centuries, has existed in many cultures and languages under many different terms. Our researcher knows English and several other languages, but not all the languages in which that concept might be expressed. So what he is looking for is a tool that will function as an “authority file,” if you will, essentially providing “see” and “see also” references across those many languages. By using known taxonomies, linked data, and library-related, accessible authority files, we hope to analyze the data sources and then visualize the results to show the overlaps and correlations that exist between them. We believe that when data sources are analyzed this way, the Exaptive product should provide tremendous new insight into the topic and field of study.
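To make the “see” and “see also” idea concrete, here is a minimal sketch of what such a cross-language authority lookup might look like. The concept, language labels and cross-references below are invented for illustration; a real implementation would draw on published taxonomies, linked data and established authority files rather than a hand-built dictionary.

```python
# Hypothetical sketch of a tiny cross-language "authority file".
# All entries here are illustrative, not from any real vocabulary.

AUTHORITY = {
    "harmony": {
        "labels": {              # preferred terms, keyed by language code
            "en": "harmony",
            "zh": "和",
            "el": "harmonia",
        },
        "see": ["concord"],      # "see": use this other heading instead
        "see_also": ["balance"], # "see also": related headings worth exploring
    },
    "concord": {"labels": {"en": "concord", "la": "concordia"},
                "see": [], "see_also": ["harmony"]},
    "balance": {"labels": {"en": "balance", "fr": "équilibre"},
                "see": [], "see_also": ["harmony"]},
}

def lookup(term):
    """Find the concept whose label in any language matches `term`,
    and return its identifier, all labels, and its cross-references."""
    term = term.lower()
    for concept_id, entry in AUTHORITY.items():
        if term in (label.lower() for label in entry["labels"].values()):
            return {
                "concept": concept_id,
                "labels": entry["labels"],
                "see": entry["see"],
                "see_also": entry["see_also"],
            }
    return None  # term not found under any language's label
```

The point of the sketch is that a researcher who only knows the French term can still land on the same concept record, and from there follow “see also” references outward, which is exactly the cross-language bridging described above.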
Another exciting part of the Exaptive product is the ability to create what are called “cognitive networks”: groups composed of the researchers responsible for creating the research and research data found and used in the analysis phase. Unlike social networks, where you have to slowly and manually build your connections or friends, the cognitive network is built automatically as researchers explore, select, filter and analyze the data they need. The result is that these cognitive networks facilitate a researcher’s work instead of adding to it. This cognitive network can become a set of collaborators or peers that, if willing, could focus on analyzing, vetting and refining new knowledge from inception to dissemination. (Yes, obviously, trust plays a huge part in this and must be dealt with as part of the model.) It’s a model more capable of scaling to the vast amounts of research being conducted today, and one that would increase the speed at which the resulting new knowledge is disseminated, because it isn’t dependent solely on the publication of physical artifacts such as papers or books (although that could still be done). Rather, it would accommodate knowledge being born digital, which, once vetted by the cognitive network, could quickly be disseminated to others for them to continue the cycle and build upon yet further. Think about how powerful that could be in creating new knowledge!
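One way to picture a network that builds itself from research activity, with no manual “friending” step, is sketched below. To be clear, this is not how Exaptive actually implements cognitive networks; it is only an illustration, with invented names, of the idea that links between researchers can accrete as a side effect of someone exploring and analyzing their work together.

```python
# Illustrative sketch only: a network that accretes automatically as a
# side effect of research sessions. Not Exaptive's actual mechanism.
from collections import defaultdict
from itertools import combinations

class CognitiveNetwork:
    def __init__(self):
        # weighted edges: how often two researchers' work appeared
        # together in someone's analysis
        self.edges = defaultdict(int)

    def record_session(self, authors_of_items_used):
        """Each time a user explores, filters or analyzes a set of items,
        link the authors behind those items to one another."""
        for a, b in combinations(sorted(set(authors_of_items_used)), 2):
            self.edges[(a, b)] += 1

    def collaborators(self, researcher):
        """Return researchers connected to `researcher`, strongest first."""
        related = {}
        for (a, b), weight in self.edges.items():
            if researcher == a:
                related[b] = weight
            elif researcher == b:
                related[a] = weight
        return sorted(related, key=related.get, reverse=True)
```

The design choice worth noticing is that `record_session` is the only write path: the network grows purely from the work a researcher was already doing, which is what makes it facilitate that work rather than add to it.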
The Exaptive product, when coupled with what we’ve already got in place at the OU Libraries (our discovery system, open journal/open access publication system, repositories and more), will allow us to move further and faster in helping to evolve ideas into new knowledge.
One thing I need to say at this point is that this is both a technological challenge and a change management challenge. If this project is successful, it has the capability to remarkably change the engagement and knowledge creation experience for many people. To smooth this process, we will need to educate our users/members on the needs we’re addressing and on how and why this is a major step forward. If you’ve read articles or books on change management, you’ll know one of the best ways to do this is to work with thought leaders on our campus and in our community, and to provide them with the extra support needed to learn and use these new tools so that they succeed with them. If we do, it’s a win-win for all involved: it puts users on the front edge of research and dissemination in their field, and it gives us a success case to point to as we talk with others and try to inspire them.
Of course, those who wish to work in isolation can continue to do so, even with this new model. However, bringing multi-disciplinary and multi-faceted viewpoints to the table throughout the lifetime of an idea adds new value, helping to make those ideas substantially more valuable and more applicable in the end. We already see the health sciences moving in this direction, because they so clearly understand the interconnected nature of the organs of the human body and the need to bring researchers together as ideas are developed. The Open Science Framework is another model in which collaboration and shared data sets occur early in the research process.
As I said above, there are lots of implications for new models of knowledge creation based on this initiative. Existing culture and change are two of the largest challenges early in the process. But first, we’re going to focus on getting the technological foundations in place and then see what we can do. Stay tuned!
NOTE: Those at the University of Oklahoma interested in a departmental demonstration of this technology and/or meeting with key project team members should contact me at: carl(dot)grant(at)ou(dot)edu