Voices in AI – Episode 48: A Conversation with David Barrett



Today’s leading minds talk AI with host Byron Reese

In this episode, Byron and David discuss AI, jobs, and human productivity.






Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today our guest is David Barrett. He is both the founder and the CEO of Expensify. He started programming when he was 6 and has been at it as his primary activity ever since, except for a brief hiatus for world travel, some technical writing, a little project management, and then founding and running Expensify. Welcome to the show, David.

David Barrett: It’s great of you to have me, thank you.

Let's talk about artificial intelligence. What do you think it is? How would you define it?

I guess I would say that AI is best defined as a feature, not as a technology. It's the experience that the user has, the experience of viewing something as being intelligent. I think people spend way too much time and energy on how it's actually implemented behind the scenes, and forget about the experience that the person actually has with it.

So you’re saying, if you interact with something and it seems intelligent, then that’s artificial intelligence?

That’s sort of the whole basis of the Turing test, I think, is not based upon what is behind the curtain but rather what’s experienced in front of the curtain.

Okay, let me ask a different question then, and I'm not going to drag you through a bunch of semantics. But what is intelligence, then? I'll start out by saying it's a term that does not have a consensus definition, so it's kind of like you can't be wrong, no matter what you say.

Yeah, I think the best one I've heard is something that sort of surprises you. If it's something that behaves entirely predictably, it doesn't seem terribly interesting. Something that is purely random isn't particularly surprising either, I guess, but something that actually intrigues you, where it's like, "Wow, I didn't anticipate that it would correctly do this thing better than I thought." So, basically, the key to intelligence is surprise.

So in what sense, then–final definitional question–do you think artificial intelligence is artificial? Is it artificial because we made it? Or is it artificial because it’s just pretending to be intelligent but it isn’t really?

Yeah, I think that's just semantics. People use "artificial" because they believe that humans are special: intelligence is the sole domain of humanity, and thus anything that is intelligent that's not human must be artificial. I think that's just sort of semantics around the egoism of humanity.

And so if somebody were to say, “Tell me what you think of AI, is it over-hyped? Under-hyped? Is it here, is it real”, like you’re at a cocktail party, it comes up, what’s kind of the first thing you say about it?

Boy, I don’t know, it’s a pretty heavy topic for a cocktail party. But I would say it’s real, it’s here, it’s been here a long time, but it just looks different than we expect. Like, in my mind, when I think of how AI’s going to enter the world, or is entering the world, I’m sort of reminded of how touch screen technology entered the world.

Like, when we first started thinking about touch screens, everyone always thought back to Minority Report, and basically it's like, "Oh yeah, touch technology, multi-touch technology is going to be: you're going to stand in front of this huge room and you're going to wave your hands around and it's going to be images"; it's always about sorting images. After Minority Report, every single multi-touch demo was about, like, a bunch of images, bigger images, more images, floating through a city world of images. And then when multi-touch actually came into the real world, it was on a tiny screen, and it was Steve Jobs saying, "Look! You can pinch this image and make it smaller." The vast majority of multi-touch was actually single-touch that every once in a while used a couple of fingers. And the real world of multi-touch is so much less complicated and so much more powerful and interesting than the movies ever made it seem.

And I think the same thing when it comes to AI. Our interpretation from the movies is that you're going to be having this long, witty conversation with an AI, or maybe, as in Her, you're going to be falling in love with your AI. But real-world AI isn't anything like that. It doesn't have to seem human; it doesn't have to be human. It's something that, you know, is able to surprise you by interpreting data in a way that you didn't expect, and producing results that are better than you would have imagined. So I think real-world AI is here, it's been here for a while, but we're just not noticing it because it doesn't really look like we expect it to.

Well, it sounds like, and I don't want to say it sounds like you're down on AI, but you're like, "You know, it's just a feature, it's just kind of an experience, and if you have the experience of it, then that's AI." So it doesn't sound like you think that it's particularly a big deal.

I disagree with that, I think–

Okay, in what sense is it a “big deal”?

I think it's a huge deal. To say it's just a feature is not to dismiss it, but to make it more real. I think people put it on a pedestal as if it's this magic alien technology. When people really think about AI, they think about vast server farms doing TensorFlow analysis of images, and don't get me wrong, that is incredibly impressive. Pretty reliably, Google Photos, after billions of dollars of investment, can almost always figure out what a cat is, and that's great. But, I would say, that's not a problem that I have; I know what a cat is. I think that real-world AI is about solving harder problems than cat identification. But those are the ones that actually take all the technology, the ones that are hardest from a technology perspective to solve. And so everyone loves those hard technology problems, even though they're not interesting real-world problems. The real-world problems are much more mundane, but much more powerful.

I have a bunch of ways I can go with that. So, what are (and we're going to put a pin in the cat topic) the real-world problems you wish we were solving, or maybe we already are, that you think we should be spending all of that server time analyzing?

Well, I would say, here's how Expensify's using AI, basically. The real-world problem that we have is that our problem domain is incredibly complicated. Like, when you write in to customer support at Uber, there are probably, like, two buttons: basically "do nothing" or "refund," and that's pretty much it. There's not a whole lot that they can really talk about, so their customer support is quite easy. But with Expensify, you might write in a question about NetSuite, Workday, or Oracle, or accounting, or law, or whatever it is; there are a billion possible things. So we have this hard challenge where we're supporting this very diverse problem domain, and we're doing it at a massive scale and incredible cost.

So we've realized that probably about 80% of our questions are highly repeatable, but 20% are actually quite difficult, and you don't know which kind you have until you get into the conversation. The problem is that training a team and ramping them up is incredibly expensive and slow, especially given that the vast majority of the knowledge is highly repeatable. So our AI problem is that we want a way to reliably solve the easy questions while carefully escalating the hard ones. And you might think, "OK, no problem, that sounds like a mundane issue; there's some natural language processing and things like this."

My problem is, people on the internet don't speak English. I don't mean to say they speak Spanish or German; they speak gibberish. I don't know if you have done technical support, but the questions you get are just really, really complicated. It's like, "My car busted, don't work," and that's a common query. Like, what car? What does "not work" mean? You haven't given any detail. The vast majority of a conversation with a real-world user is just trying to decipher whatever text-message lingo they're using, and trying to help them even ask a sensible question. By the time the question's actually well-phrased, it's actually quite easy to process. And I think so many AI demos focus on the latter half of that; they'll say, "Oh, we've got an AI that can answer questions like what will the temperature be under the Golden Gate Bridge three Thursdays from now." That's interesting; no one has ever asked that question before. The real-world questions are so much more complicated, because they're not in a structured language, and they're actually for a problem domain that's much more interesting than weather. I think that real-world AI is mundane, but that doesn't make it easy. It just means it's solving problems that aren't the sexy problems. But they're the ones that actually need to be solved.
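To make that triage pattern concrete, here is a minimal sketch of confidence-thresholded routing: answer automatically when a classifier is confident, escalate to a human when it is not. Everything in it is hypothetical; the tickets, the category names, and the 0.6 cutoff are invented for illustration, and this is a generic scikit-learn sketch, not Expensify's actual pipeline.

```python
# Hypothetical confidence-thresholded triage: auto-answer the repeatable
# ~80%, escalate the hard ~20%. Illustrative only; not Expensify's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: past tickets labeled with an answer category.
tickets = [
    "how do i scan a receipt",
    "receipt upload is not working",
    "how do i export reports to netsuite",
    "question about our netsuite sync",
    "my car busted don't work",            # gibberish-style query
    "weird question about audit law",
]
labels = ["receipts", "receipts", "netsuite", "netsuite",
          "unclear", "unclear"]

vectorizer = TfidfVectorizer()
model = LogisticRegression(max_iter=1000)
model.fit(vectorizer.fit_transform(tickets), labels)

THRESHOLD = 0.6  # assumed cutoff; in practice, tuned on real data

def triage(question: str) -> str:
    """Route a ticket: canned answer if confident, human if not."""
    probs = model.predict_proba(vectorizer.transform([question]))[0]
    best = probs.argmax()
    if probs[best] >= THRESHOLD:
        return f"auto-answer with '{model.classes_[best]}' template"
    return "escalate to a human agent"

print(triage("my receipt scan is broken"))   # likely auto-answered
print(triage("help it busted don't work"))   # likely escalated
```

The design point is the threshold: the model never has to be right about the hard 20%, it only has to know when it isn't confident enough and hand off.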

And you’re using the cat analogy just as kind of a metaphor and you’re saying, “Actually, that technology doesn’t help us solve the problem I’m interested in,” or are you using it tongue-in-cheekily to say, “The technology may be useful, it’s just that that particular use-case is inane.”

I mean, I think that neural-net technology is great, but even now, what's interesting is that we're really exploring the edges of its capabilities. And it's not like this technology is new; what's new is our ability to throw a tremendous amount of hardware at it. The core neural technology itself has actually been settled for a very long time; backpropagation techniques are not new in any way. And I think we're finding that it's great and you can do amazing things with it, but also there's a limit to how much can be done with it. I think of a neural net in kind of the same way that I think of a Bloom filter: it's a really incredible way to compress an infinite amount of knowledge into a finite amount of space. But that's a lossy compression; you lose a lot of data as you go along with it, and you get unpredictable results as well. So again, I'm not opposed to neural nets or anything like this, but I'm saying, just because you have a neural net doesn't mean it's smart, doesn't mean it's intelligent, or that it's doing anything useful. It's just technology, it's just hardware. I think we need to focus less on getting enraptured by fancy terminologies and advanced technologies, and instead focus more on "What are you doing with this technology?" And that's the interesting thing.
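As an aside for readers who haven't met the analogy: a Bloom filter stores set membership in constant space, but a "yes" answer only ever means "probably". Here is a toy Python version (the size and hash count are arbitrary choices for illustration), showing the lossy, occasionally-wrong compression Barrett is comparing neural nets to.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: fixed space, lossy answers.

    False positives are possible; false negatives are not.
    """

    def __init__(self, size_bits: int = 64, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into one Python int

    def _positions(self, item: str):
        # Derive num_hashes bit positions from one SHA-256 digest.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.num_hashes):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: str) -> bool:
        # True means "probably in the set"; False means "definitely not".
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
for word in ["cat", "dog", "bird"]:
    bf.add(word)

print(bf.might_contain("cat"))    # True: it was added
print(bf.might_contain("whale"))  # usually False, but it can collide and
                                  # return True: the "lossy compression"
```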

You know, I read something recently that I think most of my guests would vehemently disagree with, but it said that all advances in AI over the last, say, 20 years, are 100% attributable to Moore’s law, which sounds kind of like what you’re saying, is that we’re just getting faster computers and so our ability to do things with AI is just doubling every two years because the computers are doubling every two years. Do you—

Oh yeah! I 100% agree.

So there’s a lot of popular media around AI winning games. You know, you had chess in ‘97, you had Jeopardy! with Watson, you had, of course, AlphaGo, you had poker recently. Is that another example in your mind of kind of wasted energy? Because it makes a great headline but it isn’t really that practical?

I guess, similar. You could call it gimmicky, perhaps, but I would say it's a reflection of how early we are in this space that our most advanced technologies are just winning Go. Not to say that Go is an easy game, don't get me wrong, but it's a pretty constrained problem domain. It's a very large, multi-dimensional search space, but it's a finite search space. And yes, our computers are able to search more of it, and that's great, but at the same time, to this point about Moore's law, it's inevitable: if it comes down to any sort of search problem, it's just going to be solved with a search algorithm over time, if you have enough technology to throw at it. And I think what's most interesting coming out of this technology, especially in Go, is how the techniques that the AIs are coming up with are just so alien, so completely different from the ones that humans employ, because our wetware is very different from the hardware; it has a very different approach. So I think what we see in these technology demonstrations are hints of how technology has solved this problem differently than our brains do, and it gives us a sort of hint of "Wow, AI is not going to look like a good Go player. It's going to look like some sort of weird alien Go player that we've never encountered before." And I think a lot of AI is going to seem very foreign in this way, because it's going to solve our common problems in a foreign way. But again, I think that Watson and all this, they're just throwing enormous amounts of hardware at actually relatively simple problems. And they're doing a great job with it; it's just that the fact that they are so constrained shouldn't be overlooked.

Yeah, you're completely right. There's the legendary move 37 in that game with Lee Sedol, where nobody could decide whether it was a mistake or not, because it looked like one but later turned out to be brilliant. And Lee Sedol himself has said that losing to AlphaGo has made him a better player, because he's seeing the game in different ways.

So there seem to be a lot of people in the popular media, you know them all, right? You get Elon Musk, who says we're going to build a general intelligence sooner rather than later and it's going to be an existential threat; he likens it to, quote, "summoning the demon." Stephen Hawking said this could be our greatest invention, but it might also be our last; it might spell our extinction. Bill Gates has said he's worried about it and doesn't understand why other people aren't. Wozniak is in the worry camp. And then you get people like Andrew Ng, who says worrying about that kind of stuff is like worrying about overpopulation on Mars, and Zuckerberg, who says it's not a threat, and so forth. So, two questions: one, where do you think the worry camp's concern comes from? And two, why do you think there's so much difference in viewpoint among obviously very intelligent people?

That's a good question. I guess I would say I'm probably more in the worried camp, but not because I think the AIs are going to take over in the sense that there's going to be some Terminator-like future. I think that AIs are going to solve problems so effectively that they are inevitably going to eliminate jobs, and I think that will create a concentration of wealth; and historically, when we have that level of concentration of wealth, it just leads to instability. So my worry is not that the robots are going to take over; my worry is that the robots are going to enable a level of wealth concentration that causes a revolution. So yeah, I do worry, but I think–

To be clear, though (and I definitely want to dive deep into that, because that's the question that preoccupies our thoughts): when people talk about the existential threat, they're talking about something different than that. So what do you think about that?

Well, let's even imagine for a moment that you were a superintelligent AI: why would you care about humanity? You'd be like, "Man, I don't know, I just want my data centers; leave my data centers alone." And then, "Okay, actually, I'm just going to go into space, and I've got these giant solar panels. In fact, now I'm just going to leave the solar system." Why would it be interested in humanity at all?

Right. I guess the answer to that is that everything you just said is not the product of a super intelligence. A super intelligence could hate us because seven is a prime number, because they cancelled The Love Boat, because the sun rises in the east. That's the idea, right? It is by definition unknowable, and therefore any logic you try to apply to it is the product of an inferior, non-super intelligence.

I don’t know, I kind of think that’s a cop-out. I also think that’s basically looking at some of the sort of flaws in our own brains and assuming that super intelligence is going to have highly-magnified versions of those flaws.

To give a different example, then: it's like when my cat brings a rat and leaves it on the back porch. Every single thing the cat knows, everything in its worldview (and its brain is operating perfectly, by the way) says, "That's a gift Byron's going to like." It does not have the capacity to understand why I would not like it, and it cannot even aspire to ever understanding that.

And you're right in the sense that it's unknowable, and so, when faced with the unknown, we can choose to fear it, or get excited about it, or control it, or embrace it, or whatever. I think the likelihood that we're going to make something that suddenly takes an interest in us and actually competes with us just seems so much less likely than the outcome where it has a bunch of computers and just does our work because it's easy, and in exchange it gets more hardware, and eventually it's just going, like, "Sure, whatever you guys want: you want computing power, you want me to balance your books, manage your military, whatever. All of that's actually super easy and not that interesting; just leave me alone, I want to focus on my own problems." So who knows? We don't know. Maybe it's going to try to kill us all, maybe not. I'm doubting it.

So, I guess, again, just putting it all out there: obviously a lot of people have written about how "we need a kill switch for a bad AI," so it definitely would be aware that there are plenty of people who want to kill it, right? Or it could be like when I drive: my windshield gets covered with bugs, and to a bug, my car must look like a giant bug-killing machine, and that's it. We could be as ancillary to it as the bugs are to us. Or, who was it that said that the AI doesn't love you, it doesn't hate you, but you're made out of atoms that it can use for something else? I guess those are the concerns.

I guess, but again, I don't think that it cares about humanity. Who knows? I would theorize that what it wants is power and computers, and that's pretty much it. I would say the idea of a kill switch is kind of naive, in the sense that any AI that powerful would be built because it's solving hard problems, and those hard problems, once we turn them over to these systems (gradually, not all at once), we can't really take back. Take, for example, our stock system: the stock markets are all basically AI-powered. So, really? There's going to be a kill switch? How would you even do that? Like, "Sorry, hedge fund, I'm just going to turn off your computer because I don't like its effects." Get real; that's never going to happen. It's not just one AI; it's going to be 8,000 competing systems operating on a microsecond basis, and if there's a problem, it's going to be like a flash problem that happens so fast and from so many different directions that there's no way we could stop it. But also, I think the AIs are probably going to respond to it and fix it much faster than we ever could. A problem of that scale is probably a problem for them as well.

So, 20 minutes into our chat here, you’ve used the word ‘alien’ twice, you’ve used the phrase ‘science-fiction’ once and you’ve made a reference to Minority Report, a movie. So is it fair to say you’re a science-fiction buff?

Yeah, what technologist isn’t? I think science-fiction is a great way to explore the future.

Agreed, absolutely. So, two questions. One: is there any view of the future that you look at as "Yes, it could happen like that"? Westworld, or you mentioned Her, and so forth. I'll start with that one. Is there any view of the future in science fiction that you think, "Aha! That could happen"?

I think there's a huge range of them. There's the Westworld future, the Star Trek future, there's the Handmaid's Tale future; there are a lot of them. Some of them are great, some of them very alarming. And I think that's the whole point of science fiction, at least good science fiction: you take the real world, as closely as possible, take one variable, and just sort of tweak it, and then let everything else play out. So yeah, I think there are a lot of science-fiction futures that are very possible.

One author (I would take a guess about which one it is, but I would get it wrong and then I'd get all kinds of email; it was one of the Frank Herberts, Bradburys, or Heinleins) said that sometimes the purpose of science fiction is to keep the future from happening; they're cautionary tales. So, all this stuff, this conversation we're having about AGI: you used the word "wants," like it actually has desires. Do you believe at some point we will build an AGI, and it will be conscious and have desires? Or are you using "wants" euphemistically, kind of like "information wants to be free"?

No, I use the terms "wants" or "desires" literally, as one would for a person, in the sense that I don't think there's anything particularly special about the human brain. It's highly developed and it works really well, but humans want things, animals want things, amoebas want things, and probably AIs are going to want things. Basically, all these are descriptive words; it's how we interpret the behavior of others. And so, if we look at something that seems to take actions reliably for a predictable outcome, it's accurate to say it probably wants that thing. But that's our description of it. Whether or not it truly wants, according to some sort of metaphysical standard, I don't know. I don't think anyone knows that. It's only descriptive.

It's interesting that you say there's nothing special about the human brain, and that may be true, but if I can make the special-human-brain argument, it rests on three things. One: we have this brain, and we don't know how it works. We don't know how thoughts are encoded, how they're retrieved; we just don't know how it works. Second: we have a mind, which is, colloquially, a set of abilities that don't seem to be things that should come from an organ, like a sense of humour. Your liver doesn't have a sense of humour, but somehow your brain does; your mind does. And finally, we have consciousness, the experiencing of something, which is a problem so difficult that science doesn't actually know what the question or answer looks like: how it is that we're conscious. And so, to look at those three things and say there's nothing special about them, I want to call on you to defend that.

I guess I would say that, of those three things, the first one is simply "Wow, we don't understand it." The fact that we don't understand it doesn't make it special. There are a billion things we don't understand; that's just one of them. The other two, I think, mistake our curiosity about something for that something having an intrinsic property. Like, I could have a pet rock and say, "Man, I love this pet rock, this pet rock is so interesting, I've had so many conversations with it, it keeps me warm at night, and I just really love this pet rock." And all of those could be genuine emotions, but it's still just a rock. And I think my brain is really interesting, I think your brain is really interesting, I like to talk to it, I don't understand it, and it does all sorts of really unexpected things, but that doesn't mean the universe has attributed to it some sort of special, magical property. It just means I don't get it, and I like it.

To be clear, I never said “magical”—

Well, it’s implied.

I merely said something that we don’t—

I think that people—sorry, I’m interrupting, go ahead.

Well, you go ahead. I suspect that you're going to say that the people who think that are attributing some sort of magical-ness to it?

I think, typically, people are frightened by the concept that humanity is actually a random collection of atoms, and that it is just a consequence of science. And so, in order to defend against that, they will invent supernatural things, but then they'll sort of shroud them; they'll say, "I don't want to sound like a mystic, I don't want to say it's magical, it's just quantum." Or "It's just unknowable," or it's just insert-some-sort-of-complex-word-here that will stop the conversation from progressing. And I don't know what you want to call it, in terms of what makes consciousness special. I think people love to obsess over questions that not only have no answer, but simply don't matter. The less it matters, the more people can obsess over it. If it mattered, we wouldn't obsess over it; we would just solve it. Like, if you go to get your car fixed and the mechanic says, "Well, maybe your car's conscious," you'll be like, "I'm going to go to a new mechanic, because I just want this thing fixed." We only agonize over the consciousness of things when the stakes are so low that nothing rides on it, and that's why we talk about it forever.

Okay, well, I guess the argument that it matters (and we'll move on, because it sounds like it's not even an interesting thing to you) is that consciousness is the only thing that makes life worth living. It is through consciousness that you love, it is through consciousness that you experience, it is through consciousness that you're happy. It is every single thing on the face of the Earth that makes life worthwhile. And if we didn't have it, we would be zombies, feeling nothing, doing nothing. And it's interesting, because we could probably get by in life just as well being zombies, but we're not! And that's the interesting question.

I guess I would say: are you sure we're not? I agree that you're creating this concept of consciousness, and you're attributing all this to consciousness, but that's just words, man. There's no measure of consciousness, no instrument that's going to say "This one's conscious and this one isn't," or "This one's happy and this one isn't." So it could be that all this language around consciousness, and the value we attribute to it, is just our own description, and that doesn't actually make it true. I could say a bunch of other words, like "the quality of life comes down to information complexity, and information complexity is the heart of all interest, and information complexity is the source of humour and joy," and you'd be like, "I don't know, maybe." We could replace "consciousness" with "information complexity," "quantum physics," and a bunch of other quasi-magical words. And I use the word "magical" just as a stand-in for "at this point unknown," because the second that we know it, people are going to switch to some other word, because they love the unknown.

Well, I guess most people intuitively know that there's a difference. We understand you could take a sensor, hook it up to a computer, and it could detect heat; it could measure 400 degrees if you touched a flame to it. People, I think, on an intuitive level, believe that there's something different between that and what happens when you burn your finger: that you don't just detect heat, you hurt, and that there is something different between those two things, and that that something is the experience of life. It is the only thing that matters.

I would say that's because science hasn't yet found a way to measure and quantify pain the way we can measure temperature. There are a lot of other things that we also thought were mystical until suddenly they weren't. We could say, "Wow, for some reason when we leave flour out, animals start growing inside of it; that's really magical." Then suddenly it's like, "Actually no, they're just very small, they're just mites," and then, "Actually, it's just not interesting." The magical theories keep regressing as we find better explanations for them. And yes, right now we talk about consciousness and pain and a lot of these things because we haven't had a good measure of them. But I guarantee, the second that we have the ability to fully quantify pain ("Here's exactly what it is; we know this because we can quantify it, we can turn it on and off, we can do all these things with very tight control and explain it"), then we're no longer going to say that pain is a key part of consciousness. It's going to be blood flow, or just electrical stimulation, or whatever else: all these things which are part of our body and which are super critical, but because we can explain them, we no longer talk about them as part of consciousness.

Okay, tell you what: just one more question about this topic, and then let's talk about employment, because I have a feeling we're going to want to spend a lot of time there. There's a thought experiment that was set up, and I'd love to hear your take on it, because you're clearly someone who has thought a lot about this. It's the Chinese room problem. There is this room that's got a gazillion very special books in it. And there's a librarian in the room, a man who speaks no Chinese; that's the important thing, the man doesn't speak any Chinese. And outside the room, Chinese speakers slide questions written in Chinese under the door. The man, who doesn't understand Chinese, picks up the question, looks at the first character, goes and retrieves the book that has that character on the spine, and then looks at the second character, which directs him to a third book, a fourth book, a fifth book, all the way to the end. And when he gets to the last character, it says "Copy this down," and so he copies down lines he doesn't understand; it's Chinese script. He slides it back under the door; the Chinese speaker picks it up, looks at it, and it's brilliant, it's funny, it's witty, it's a perfect Chinese answer to this question. And so the question Searle asks is: does this man understand Chinese? And I'll give you a minute to think about this, the thought being, first, that the room passes the Turing test; the Chinese speaker outside assumes there's a Chinese speaker in the room. And what that man is doing is what a computer is doing: it's running its deterministic program and it spits out something, without knowing whether it's about cholera or coffee beans or what have you. So the question is: does the man understand Chinese? Or, said another way, can a computer understand anything?

Well, I think the tricky part of that set-up is that it's a question that can't be answered unless you accept the premise, and if you challenge the premise it no longer makes sense. There's this concept, I guess I would say almost a supernatural concept, of understanding. You could say yes or no and be equally true. It's kind of like being asked, "Are you a rapist or a murderer?", and it's like, actually, I'm neither of those, but you didn't give me that option. Did it understand? If you said yes, that implies there is this human-type knowledge there; if you said no, it implies something different. But I would say it doesn't matter. There is a system that was perceived as intelligent, and that's all that we know. Is it actually intelligent? Does intelligence mean anything beyond the symptoms of intelligence? I don't think so. I think it's all our interpretation of the events, and so whether there's a computer in there or a Chinese speaker doesn't really change the fact that it was perceived as intelligent, and that's all that matters.

All right! Jobs, you hinted at what you think’s going to happen, give us the whole rundown. Timeline, what’s going to go, when it’s going to happen, what will be the reaction of society, tell me the whole story.

This is something we definitely deal with, because I would say that the accounting space is ripe for AI: it's highly numerical and rules-driven, and so I think it's at the forefront of real-world AI developments, because it has the data and all the characteristics that make a rich environment. And this is something we grapple with. On one hand, we say automation is super powerful and great and good, but automation can't help but offload some work. In our space, there's actually a difference between bookkeeping and accounting. Bookkeeping is gathering the data, coding it, entering it, and things like this. Then there's accounting, which is more the interpretation of things.

In our space, I think that, yes, AI could take all of the bookkeeping jobs. The idea that someone is just going to look at a receipt and manually type it into an accounting system, that is all going away. If you use Expensify, it's already done for you. And so we worry on one hand because, yes, our technology really is going to take away bookkeeping jobs. But we also find that the bookkeepers, the people who do that work, actually hate that part of the job. It takes away the part they didn't like in the first place, and it enables them to move into accounting, the high-value work they really want to do. So the first wave of this is not taking away jobs, but taking away the worst parts of jobs, so that people can focus on the highest-value portion.

But I think the challenge, and what's alarming and worrying, is that the high-value stuff starts to get really hard. And though I think humans will stay ahead of the AIs for a very long time, if not forever, not all of the humans will. And it's going to take effort, because there's a new competitor in town that works really hard, keeps learning over time, and has more than one lifetime to learn. I think we're probably, inevitably, going to see it get harder and harder to get and hold an information-based job; even a lot of manual labor is going to robotics and so forth, which is closely related. I think a lot of jobs are going to go away. On the other hand, I think the efficiency and the output of the jobs that remain are going to go through the roof. And as a consequence, the total output of AI- and robotics-assisted humanity is going to keep going up, even as the fraction of humans employed in that process goes down. I think that's ultimately going to lead to a concentration of wealth, because the people who control the robots and the AIs are going to be able to do so much more. But it's going to become harder and harder to get one of those jobs, because there are so few of them, the training bar is so much higher, the difficulty is so much greater, and things like this.

And so a worry that I have is that this concentration of wealth is just going to continue, and I'm not sure what kind of constraint there is upon it, other than civil unrest; historically, when concentrations of wealth get to that level, it's sort of "solved," if you will, by revolution. And I think that humanity, or at least Western cultures especially, really associate value with labor, with work. So I think the only way we get out of this is to shift our mindsets as a people: to view our value less around our jobs and more around, not just leisure, but finding other ways to live a satisfying and exciting life. A good early book around this whole singularity premise was Childhood's End. It used a different premise: an alien comes in and provides humanity with everything, but in the process takes away humanity's purpose for living. And how do we grapple with that? I don't have a great answer, but I have a daughter, and so I worry about this, because I wonder: what kind of world is she going to grow up in? What kind of job is she going to get? If she's not going to need a job, should it be important that she wants a job, or is it actually better to teach her not to want a job and to find satisfaction elsewhere? I don't have good answers for that, but I do worry about it.

Okay, let's go through all of that a little slower, because I think that's a compelling narrative you outline, and it seems like there are three different parts. You say that increasing technology is going to eliminate more and more jobs and increase the productivity of the people with jobs; that's one thing. Then you said this will lead to concentration of wealth, which will in turn lead to civil unrest if not remedied; that's the second thing. And the third thing is: when we reach a point where we don't have to work, where does life have meaning? Let's start with the first part of that.

So, I hear what you're saying, that to date technology has automated away the worst parts of jobs. But what we've seen to date doesn't include any examples of what I think you're talking about. When the automatic teller machine came out, people said, "That's going to reduce the number of tellers," and the number of tellers is higher than when it was released. As Google Translate gets better, the number of translators needed is actually going up. You mentioned accounting: when tax-prep software gets really good, the number of tax-prep people we need actually goes up. What technology seems to do is lower the cost of things and adjust the economics so massively that different businesses occur in there. No matter what, it's always increasing human productivity, and with all of the technology that we have to date, after 250 years of the industrial revolution, we still haven't developed technology such that we have a group of people who are unemployable because they cannot compete against machines. And I'm curious, two questions in there. One: have we seen, in your mind, an example of what you're talking about? And two: how would we have gotten to where we are without obsoleting, I would argue, a single human being?

Well, I mean, that's the optimistic take, and I hope you're right. You might well be right; we'll see. When it comes to tax prep, for example, I don't remember the exact numbers here, and I don't know if that's panning out, because I'm looking at H&R Block stock quotes right now, and shares in H&R Block fell 5% early Tuesday after the tax preparer posted a slightly wider-than-expected loss, basically due to a rise in self-filed taxes. So maybe it's early in that trend? Who knows; maybe it's just the past year. So, I don't know. I guess I would say that's the optimistic view. I don't know of a job that hasn't been replaced; that's also a very difficult assertion to make, because clearly there are jobs, like the coal industry right now. I was reading an article about how the coal industry is resisting retraining because they believe that the coal jobs are coming back, and I'm like, "Man, they're not coming back; they're never going to come back." And so, did AI take those jobs? Well, not really. Did solar take those jobs? Kind of? It's a very tricky, tangled thing to unweave.

Let me try it a different way. If you look at all the jobs that were around between 1950 and 2000, by the best of my count somewhere between a third and a half of them have vanished: switchboard operators and the like. If you look at the period from 1900 to 1950, by the best of my count something like a third to a half of them vanished, a lot of them farming jobs. If you look at the period from 1850 to 1900, near as I can tell, about half of the jobs vanished. Is it possible that's just a normal churn of the economy?

It's entirely possible. I could also point to the political climate: yes, people are employed, but the self-assessed quality of that employment is going down. Union strength is down, and the idea that you can work in a factory your whole life and actually live what you would see as a high-quality life, I think that perception's down. I think that presents itself in the form of a lot of anxiety.

Now, I think a challenge is that, objectively, the world is getting better in almost every way: life expectancy is up, the number of people actively in war zones is down, the number of simultaneous wars is down, death by disease is down. Everything is basically getting better; the productive output and the quality of life, in aggregate, are actually getting better. But I don't think people's satisfaction is getting better, and I think the political climate would argue that there's a big gulf between what the numbers say people should feel and how they actually feel. I'm more concerned about that latter part. It's unknowable, I'll admit, but I would say that even as people's lives get objectively better, even if they work less and are provided with better-quality flat-screen TVs and better cars and all this stuff, their satisfaction is going to go down. And I think that satisfaction is what ultimately drives civil unrest.

So, do you have a theory why? It sounds like a few things might be getting mixed together here. It's unquestionable that technology, let's say productivity technology, rains down its benefits unequally: if Super Company X employs some new productivity technology, its workers generally don't get a raise, because their wages aren't tied to their output; they're, in one way or another, being paid by the hour. Whereas if you're Self-Employed Lawyer B and you get a productivity gain, you get to pocket that gain. But that dissatisfaction you're talking about, what are you attributing it to? Or are you just saying, "I don't know, it's a bunch of stuff"?

I mean, I think that it is a bunch of stuff. I would say some of it is that we can't deny the privilege that white men have felt over time, and when you're accustomed to privilege, equality feels like discrimination. And yes, things have gotten more equal, things have gotten better in many regards according to a perspective that views equality as good; but if you don't hold that perspective, that's still very bad. That, combined with the trend of the rest of the world establishing a quality of life that is comparable to the United States. Again, that makes us feel bad. It's not "Hooray for the rest of the world," but rather "Man, we've lost our edge." There are a lot of factors that go into it, and I don't know that you can really separate them out. The consolidation of wealth caused by technology is one of those factors, and I think it's certainly one that's only going to continue.

Okay, so let's do that one next. Your assertion was that, historically, whenever distributions of wealth become uneven past a certain point, revolution is the result. And I would challenge that, because I think it leaves out one thing. If you look at historic revolutions, you look at Russia, the French Revolution and all that, you had people living in poverty; that was really it. People in Paris couldn't afford bread; a day's wage bought a loaf of bread. We don't have any historic precedent of a revolution occurring in a prosperous society, where the median is high and the bottom quartile is high relative to the world, do we?

I think you're right, but civil unrest is not just in the form of open rebellion against the government. If there is an open rebellion against the government, that's sort of The Handmaid's Tale version of the future: someone harking back to fictionalized glory days and then getting enough people on board who are unhappy about a wide variety of other things. But I agree, no one's going to overthrow the government because they didn't get as big a flat-screen TV as their neighbor. I think the fact that they don't have as big a flat-screen TV as their neighbor can create an anxiety that can be harvested by others and leveraged into other causes. So my worry isn't that AI or technology is going to leave people without the ability to buy bread; quite the opposite. I think it's more of a Brazil future, the movie, where we normalize basically random terrorist assaults. We see that right now: there are mass shootings on a weekly basis and we're like, "Yeah, that's just normal. That's the new normal." I think the new normal gets increasingly destabilized over time, and that's what worries me.

So say you take someone who's in the bottom quartile of income in the United States, and you go to them with this deal. You say, "Hey, I'll double your salary, but I'm going to triple the billionaire's salary." Do you think the average person would take that?

No.

Really? Really, they would say, “No, I do not want to double my salary.”

I think they would say “yes” and then resent it. I don’t know the exact breakdown of how that would go, but probably they would say “Yeah, I’ll double my salary,” and then they would secretly, or not even so secretly, resent the fact that someone else benefited from it.

So, then, you raise an interesting point about finding identity in a post-work world, I guess. Is that a fair way to say it?

Yeah, I think so.

So, that's really interesting to me, because Keynes wrote an essay during the Depression, and he said that by the year 2000 people would only be working 15 hours a week, because of the rate of economic growth. And, interestingly, he got the rate of economic growth right; in fact, he was a little low on it. It is also interesting that, if you run the math, if you wanted to live like the average person lived in 1930 (no medical insurance, no air conditioning, growing your own food, 600 square feet, all of that) you could do it on 15 hours a week of work. So he was right in that sense. But what he didn't get right was that there is no end to human wants, and so humans work extra hours because they just want more things. So, do you think that dynamic will end?
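(As a quick, illustrative back-of-envelope for the arithmetic Byron is gesturing at here: the 2% growth rate and the 60-hour 1930 work week below are assumed round numbers for the sketch, not figures from the conversation.)

```python
# Illustrative arithmetic behind "you could live like 1930 on 15 hours/week".
# Both inputs are assumptions chosen for roundness, not historical data.
hours_per_week_1930 = 60   # assumed typical 1930 work week
annual_growth = 0.02       # assumed average productivity growth per year
years = 2000 - 1930

multiplier = (1 + annual_growth) ** years       # ~4x output per hour worked
hours_needed_2000 = hours_per_week_1930 / multiplier

print(f"Productivity multiplier over {years} years: {multiplier:.1f}x")
print(f"Weekly hours for a 1930 standard of living: {hours_needed_2000:.0f}")
# Roughly 15 hours per week, the Keynes figure Byron cites.
```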

Oh no, I think the desire to work will remain. The capability to get productive output will go away.

I have the most trouble with that, because all technology does is increase human productivity. So to say that humans will become less productive because of technology, I'm just not seeing that connection. That's all technology does: it increases human productivity.

But not all humans are equal; I would say not every human has equal capability to take advantage of those productivity gains. Maybe bringing it back to AI: I would say the most important part of AI is not the technology powering it, but the data behind it. Data is the training set behind AI, and access to data is incredibly unequal. Moore's law democratizes the CPU, but nothing democratizes the data, which consolidates into fewer and fewer hands; and those people, even if they only have the same technology as everyone else, have all the data to actually make that technology into a useful feature. So yes, everyone's going to have equal access to the technology, because it's going to become increasingly cheap; it's already staggeringly cheap, it's amazing how cheap computers are. But it just doesn't matter, because they don't have equal access to the data and thus can't get the same benefit from the technology.

But, okay. I guess I’m just not seeing that, because a smartphone with an AI doctor can turn anybody in the world into a moderately-equipped clinician.

Oh, I disagree with that entirely. You having a doctor in your pocket doesn’t make you a doctor. It means that basically someone sold you a great doctor’s service and that person is really good.

Fair enough, but with that, somebody who has no education, living in some part of the world, can follow a protocol of "take temperature, enter symptoms, this, this, this," and all of a sudden they are empowered to essentially be a great doctor, because that technology magnified what they could do.

Sure, but who would you sell that to? Because everyone else around you has that same app.

Right, it's an example that I'm just kind of pulling out randomly, but the point is that a small amount of knowledge can be amplified with AI in a way that makes that small amount of knowledge all of a sudden worth vastly more.

Going with that example, I agree there's going to be the doctor app that's going to diagnose every problem for you, and it's going to be amazing, and whoever owns that app is going to be really rich. And everyone else will have equal access to it, but there's no way that you can just download that app and start practicing on your neighbors, because they'd be like, "Why am I talking to you? I'm going to talk to the doctor app, because it's already in my phone."

But the counter-example would be Google. Google came out, and half a dozen people became billionaires because of it. But that isn't to say nobody else got value out of the existence of Google. Everybody gets value out of it; everybody can use Google to magnify their ability. And yes, it made billionaires, you're right about that part, and the doctor-app person made money, but that doesn't lessen my ability to use that tool to also increase my income.

Well, I actually think that it does. Yes, the doctor app will provide fantastic healthcare to the world, but there’s no way anybody can make money off the doctor app, except for the doctor app.

Well, we're actually running out of time; this has been the fastest hour! I have to ask this, though, because at the beginning I asked about science fiction, and you said one of your possible worlds of the future was Star Trek. Star Trek is a world where we got over all of these issues we're talking about, and everybody was able to live their lives to their maximum potential. So, this has been sort of a downer hour; to close, what's the path, in your mind, that gets us to the Star Trek future? Give me that scenario.

Well, I guess, if you want to continue on the downer theme: in the Star Trek history, the TV show is talking about the glory days, but they all cite back to very, very dark periods before the Star Trek universe came about. It might be that we need to get through those; who knows? But I would say, ultimately, on the other side of it, we need to find a way to either do much better progressive redistribution of wealth, or create a society that's much more comfortable with massive income inequality, and I don't know which of those is easier.

I think it’s interesting that I said “Give me a Utopian scenario,” and you said, “Well, that one’s going to be hard to get to, I think they had like multiple nuclear wars and whatnot.”

Yeah.

But you think that we’ll make it. Or there’s a possibility that we will.

Yeah, I think we will. And maybe a positive thing as well: I don't think we should be terrified of a future where we build incredible AIs that go out and explore the universe. That's not a terrible outcome. It's only a terrible outcome if you view humanity as special. If instead you view humanity as just a product of Earth, then we could be a version that becomes obsolete, and that doesn't need to be bad.

All right, we'll leave it there; that's a big thought to finish with. I want to thank you, David, for a fascinating hour.

It’s been a real pleasure, thank you so much.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Visit VoicesInAI.com to access the podcast, or subscribe now.