When Humans Partner with AI to Expand Knowledge
Here’s John Robb talking about the way we humans will partner with artificial intelligence systems to greatly expand the base of human knowledge. This is a topic I’m becoming increasingly interested in, and it was actually the focus of the talk I gave a couple of weeks back in Singapore.
An excerpt from John’s piece, to give you a sense for where he’s going here:
However, all of that earlier innovation is child’s play compared to what is now possible. With limited AGI, it will be possible to exponentially accelerate the gathering, improvement, and sharing of human understanding. Here’s how this is done in its most basic form (currently called cloud robotics):
* An AGI learns a task or a concept through experience (this is becoming very easy to do with model-free deep learning, Big Data and Big Sim as I pointed out yesterday).
* That understanding is packaged, uploaded, and stored in the cloud.
* Any other AGI can download that understanding as needed.
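The learn–upload–download loop above can be sketched in a few lines of code. This is a minimal illustration of the idea only: the names (`SkillStore`, `Agent`) and the toy "learning" step are hypothetical, not a real cloud-robotics API.

```python
import json


class SkillStore:
    """Stands in for the shared cloud repository of learned skills."""

    def __init__(self):
        self._skills = {}

    def upload(self, name, skill):
        # The learned understanding is packaged (serialized) and stored.
        self._skills[name] = json.dumps(skill)

    def download(self, name):
        return json.loads(self._skills[name])


class Agent:
    """A hypothetical agent that can learn, share, and acquire skills."""

    def __init__(self, store):
        self.store = store
        self.skills = {}

    def learn(self, name, experience):
        # Toy stand-in for learning from experience: reduce the raw
        # experience to a small parameter dictionary.
        self.skills[name] = {"mean": sum(experience) / len(experience)}

    def share(self, name):
        self.store.upload(name, self.skills[name])

    def acquire(self, name):
        # Any other agent downloads the packaged understanding as needed,
        # skipping the learning step entirely.
        self.skills[name] = self.store.download(name)


store = SkillStore()

teacher = Agent(store)
teacher.learn("grasp", experience=[1, 2, 3])
teacher.share("grasp")

student = Agent(store)
student.acquire("grasp")  # the skill arrives without any re-learning
```

The key point of the sketch is the asymmetry: the `teacher` pays the cost of learning once, and every other agent gets the result at the cost of a download.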
Here’s the complication: AI will be great for extracting human understanding and storing it for anyone to access, or use via an automated agent, but there is still a huge hurdle to re-integrating that new information back into a human mind. We are associative learners and, as far as we know, each of our brains stores knowledge in unique schemas. The only way we currently know how to integrate new knowledge into an existing schema is through the rather slow, associative process of learning, which requires making new connections between neural pathways. In other words, the “I just learned kung fu” trick from The Matrix seems like it will be extremely difficult, if not impossible, to accomplish because it would require an instantaneous growth/reconfiguration of physical neural pathways in the human brain. Yes, this knowledge could be stored in some hardware in the brain, but at some point, for it to become useful – and this is especially true of tacit knowledge like kung fu – it has to make the jump into the biology.
So, in other words, it’s getting easier and easier to pull knowledge out of the human brain, but I think we’re going to run into some fundamental issues with stuffing it back in. And this takes nothing away from the excellent article that John’s written here. I’m simply trying to point out one of the limits that occurs to me.