Voices in AI – Episode 79: A Conversation with Naveen Rao



Today’s leading minds talk AI with host Byron Reese

About this Episode

Episode 79 of Voices in AI features host Byron Reese and Naveen Rao discussing intelligence, the mind, consciousness, AI, and what the day-to-day looks like at Intel. Byron and Naveen also delve into the implications of an AI future.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. Today I’m excited that our guest is Naveen Rao. He is the Corporate VP and General Manager of Artificial Intelligence Products Group at Intel. He holds a Bachelor of Science in Electrical Engineering from Duke and a Ph.D. in Neuroscience from Brown University. Welcome to the show, Naveen.

Naveen Rao: Thank you. Glad to be here.

You’re going to give me a great answer to my standard opening question, which is: What is intelligence?

That is a great question. It really doesn’t have an agreed-upon answer. My version of this is about potential and capability. What I see as an intelligent system is a system that is capable of decomposing structure within data. By my definition, I would call a newborn human baby intelligent, because the potential is there, but the system is not yet trained with real experience. I think that’s different than other definitions, where we talk about the phenomenology of intelligence, where you can categorize things, and all of this. I think that’s the outcropping of having actually learned the inherent structure of the world.

So, in what sense by that definition is artificial intelligence actually artificial? Is it artificial because we built it, or is it artificial because it’s not real intelligence? It’s like artificial turf; it just looks like intelligence.

No. I think it’s artificial because we built it. That’s all. There’s nothing artificial about it. Intelligence doesn’t have to run on biological mush; it can be implemented on any kind of substrate. In fact, there’s even research on how slime mold, actually…

Right. It can work mazes…

… can solve computational problems, yeah.

How does it do that, by the way? That’s really a pretty staggering thing.

There’s a concept that we call gradients. Gradients are just how information gets more crystallized. If I feel like I’m going to learn something by going one direction, that direction is the gradient. It’s sort of a pointer in the way I should go. That can exist in the chemical world as well, and things like slime mold actually use chemical gradients that translate into information processing and actually learn the dynamics of a system. Our neurons do that. Deep neural networks do that in a computer system. They’re all based on something similar at one level.
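The “pointer in the way I should go” idea Rao describes is the core of gradient descent. Here is a minimal, illustrative sketch in Python, using a made-up one-dimensional landscape f(x) = (x − 3)² rather than anything from the conversation:

```python
# Toy gradient descent: follow the gradient "pointer" downhill.
# The landscape f(x) = (x - 3)^2 is an arbitrary illustrative choice;
# its minimum sits at x = 3.

def gradient(x):
    # Derivative of f(x) = (x - 3)^2 is 2 * (x - 3); it points uphill,
    # so we will step in the opposite direction.
    return 2.0 * (x - 3.0)

def descend(x, learning_rate=0.1, steps=100):
    # Repeatedly nudge x against the gradient, moving toward the minimum.
    for _ in range(steps):
        x -= learning_rate * gradient(x)
    return x

x_final = descend(0.0)  # starts far from the minimum, converges toward 3
```

The same loop, scaled up to millions of parameters and gradients computed by backpropagation, is what “learning” means in the deep neural networks Rao mentions.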

So, let’s talk about the nematode worm for a minute.

Okay.

You’ve got this worm, the most successful creature on the planet. Seventy percent of all animals are nematode worms. He’s got 302 neurons and exhibits certain kinds of complex behavior. There have been a bunch of people in the OpenWorm Project who spent 20 years trying to model those 302 neurons in a computer, just to get it to duplicate what the nematode does. Even among them, they say: “We’re not even sure if this is possible.” So, why are we having such a hard time with such a simple thing as a nematode worm?

Well, I think this is a bit of a fallacy of reductive thinking here, that, “Hey, if I can understand the 302 neurons, then I can understand the 86 billion neurons in the human brain.” I think that fallacy falls apart because there are different emergent properties that happen when we go from one size system to another. It’s like running a company of 50 people is not the same as running a company of 50,000. It’s very different.

But, to jump in there… my question wasn’t, “Why doesn’t the nematode worm tell us something about human intelligence?” My question was simply, “Why don’t we understand how a nematode worm works?”

Right. I was going to get to that. I think there are a few reasons for that. One is that the interaction of any complex system – hundreds of elements – is extremely complicated. There’s a concept in physics called the three-body problem. If I have two pool balls on a pool table, I can actually 100 percent predict where the balls will end up if I know the initial state and I know how much energy I’m injecting when I hit one of the balls in one direction with a certain force. If you make that three bodies, I cannot do that in closed form. I have to simulate steps along the way. That is the three-body problem, and it’s computationally intractable. So, you can imagine when it gets to 302, it gets even more difficult.
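The “simulate steps along the way” point can be made concrete with a toy N-body integrator. Everything below is illustrative: the masses, starting positions, gravitational constant, and the naive Euler stepping scheme are all made-up choices to show why the state must be advanced in small increments rather than solved in one closed-form expression:

```python
# Toy N-body simulation: no closed-form solution exists for three
# mutually attracting bodies, so we advance the state step by step.
# Naive Euler integration and unit-free toy values, for illustration only.

def step(positions, velocities, masses, dt=0.001, G=1.0):
    # Accumulate pairwise gravitational accelerations, then move bodies.
    n = len(positions)
    accels = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5  # |r|^3 for the inverse-square law
            accels[i][0] += G * masses[j] * dx / r3
            accels[i][1] += G * masses[j] * dy / r3
    for i in range(n):
        velocities[i][0] += accels[i][0] * dt
        velocities[i][1] += accels[i][1] * dt
        positions[i][0] += velocities[i][0] * dt
        positions[i][1] += velocities[i][1] * dt
    return positions, velocities

# Three bodies: a heavy one at the origin and two light ones in orbit.
pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vel = [[0.0, 0.0], [0.0, 0.5], [-0.5, 0.0]]
m = [1.0, 0.1, 0.1]
for _ in range(1000):  # 1000 small steps; there is no shortcut formula
    pos, vel = step(pos, vel, m)
```

Note that tiny errors in each step compound over time, which is exactly the sensitivity Rao invokes when scaling the argument up to 302 interacting neurons.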

And what we see in big systems like mammalian brains, where we have billions of neurons rather than 300, is that you actually have pockets of closely interacting pieces in a big brain that interact at a higher level. That’s what I was getting at when I talked about these emergent properties. So, you still have that 302-body problem, if you will, in a big brain as you do in a small brain. That complexity hasn’t gone away, even though it seemingly is a much simpler system. The interaction between 302 different things, even when you know precisely how each one of them is connected, is just a very complex matter. If you try to model all the interactions and you’re off by just a little bit on any one of those things, the entire system may not work. That’s why we don’t understand it: you can’t characterize every piece of this, like every synapse… you can’t mathematically characterize it. And if you don’t get it perfect, you won’t get a system that functions properly.

So, are you suggesting by extension that the Human Brain Project in Europe, which really is… You’re laughing and nodding. What’s your take on that?

I am not a fan of the Human Brain Project for this exact reason. The complexity of the system is just incredibly high, and if you’re off by one tiny parameter, by a tiny little amount, it’s sort of like the butterfly effect. It can have huge consequences on the operation of the system, and you really haven’t learned anything. All you’ve learned how to do is model some microdynamics of a system. You haven’t really gotten any true understanding of how the system really works.

You know, I had a guest on the show, Nova Spivack, who said that a single neuron may turn out to be as complicated as a supercomputer, and it may even operate down at the Planck level. It’s an incredibly complex thing.

Yeah.

Is that possible?

It is a physical system – a physical device. One could argue the same thing about a single transistor as well. We engineer these things to act within certain bounds… and I believe the brain actually takes advantage of that as well. So, a neuron… to completely, accurately describe everything a neuron is doing, you’re absolutely right. It could take a supercomputer to do so, but we don’t necessarily need to extract a supercomputer’s worth of value from each neuron. I think that’s a fallacy.

There are lots of nonlinear effects and all this kind of crazy stuff that are happening that really aren’t useful to the overall function of the brain. Just like an individual neuron can do very complicated things, when we put a whole bunch of [transistors] together to build a processor, we’re exploiting one piece of the way that transistor behaves to make that processor work. We’re not exploiting everything in the realm of possibility that the transistor can do.

We’re going to get to artificial intelligence in a minute. It’s always great to have a neuroscientist on the show. So, we have these brains, and you said they exhibit emergent properties. Emergence is of course the phenomenon where the whole of something takes on characteristics that none of the components have. And it’s often thought of in two variants. One is weak emergence, where once you see the emergent behavior, with enough study you can kind of reverse engineer… “Ah, I see why that happened.” And one is a much more controversial idea of strong emergence, where the emergent property may not be derivable from the components at all. Do you think human intelligence is a weak emergent property, or do you believe in strong emergence?

I do in some ways believe in strong emergence. Let me give you the subtlety of that. I don’t necessarily think it can be analytically solved because the system is so complex. What I do believe is that you can characterize the system within certain bounds. It’s much like how a human may solve a problem like playing chess. We don’t actually pre-compute every possibility. We don’t do that sort of brute force thing. But we do come up with heuristics that are accurate most of the time. And I think the same thing is true with the bounds of a very complex system like the brain. We can come up with bounds on these emergent properties that are accurate 95 percent of the time, but we won’t be accurate 100 percent of the time. It’s not going to be as beautiful as some of the physics we have that can describe the world. In fact, even physics might fall into this category as well. So, I guess the short answer to your question is: I do believe in strong emergence that will never actually 100 percent describe…

But, do you think fundamentally intelligence could, given an infinitely large computer, be understood in a reductionist format? Or is there some break in cause and effect along the way, where it would be literally impossible? Are you saying it’s practically impossible or literally impossible?

…To understand the whole system top to bottom, from the emerging…?

Well, to start with, this is a neuron.

Yeah.

And it does this, and you put 86 billion together and voilà, you have Naveen Rao.

I think it’s literally impossible.

Okay, I’ll go with that. That’s interesting. Why is it literally impossible?

Because the complexity is just too high, and the amount of energy and effort required to get to that level of understanding is many orders of magnitude more complicated than what you’re trying to understand.

So now, let’s talk about the mind for a minute. We talked about the brain, which is physics. To use a definition that most people I think wouldn’t have trouble with, I’m going to call the mind all the capabilities of the brain that seem a little beyond what three pounds of goo should be able to do… like creativity and a sense of humor. Your liver presumably doesn’t have a sense of humor, but your brain does. So where do you think the mind comes from? Or are you going to just say it’s an emergent property?

I do kind of say it’s an emergent property, but it’s not just an emergent property. It’s an emergent property that is actually the coordination of the physics of our brain – the way the brain itself works – and the environment. I don’t believe that a mind exists without the world. You know, a newborn baby, I called intelligent because it has the potential to decompose the world and find meaningful structure within it in which it can act. But if it doesn’t actually do that, it doesn’t have a mind. You can see that if you’ve had kids yourself. I actually had a newborn while I was studying neuroscience, and it was actually quite interesting to see. I don’t think a newborn baby is really quite sentient yet. That sort of emerges over time as the system interacts with the real world. So, I think the mind is an emergent property of brain plus environment interacting.


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.