How Google's Neural Network Hopes To Beat A 'Go' World Champion

Mar 8, 2016
Originally published on March 9, 2016 5:07 am

In South Korea on Wednesday, a human champion of the ancient game of "Go" will square off against a computer programmed by Google DeepMind, an AI company owned by the search giant. If the machine can beat the man over a five-day match, then researchers say it will be a milestone for artificial intelligence.

Here are the key things to know about the match and what it will mean for the future, both of humanity and our robot overlords.

1. A computer won at chess 20 years ago. Go is tougher.

IBM grabbed the headlines when its Deep Blue supercomputer bested world champion Garry Kasparov in 1997.

But chess is a computer's game. It has strict rules and a limited number of moves each turn. Deep Blue gained the upper hand by crunching a huge volume of possible moves to see which ones would lead to a win.

Go is a very different kind of game. Players use stones to fence off territory and capture each other's pieces. It has fewer rules and more choices each turn. In fact, "there are more possible 'Go' positions than there are atoms in the Universe," says Demis Hassabis, a researcher with Google DeepMind.

Computers struggle with that kind of open-ended choice, which makes Go a nightmare for brute-force programs.

2. This program taught itself how to play.

The Google program, known as AlphaGo, learned the game with little direct human instruction. It started by studying a database of about 100,000 human matches, then continued by playing against itself millions of times.

As it played, it adjusted its own internal parameters and improved. This type of self-learning program is known as a neural network, an approach loosely based on theories of how the human brain works.

AlphaGo consists of two neural networks: the first proposes the most promising move to play each turn, and the second evaluates board positions to estimate who is winning overall.
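The two-network split described above can be illustrated with toy stand-ins. To be clear, this is an invented sketch for readers curious what "policy" and "value" mean in code, not DeepMind's actual system: the board, scoring heuristic, and function names are all assumptions made for clarity.

```python
import math

# Toy 3x3 board: 1 = our stones, -1 = opponent's stones, 0 = empty.
board = [[1, 0,  0],
         [0, 1, -1],
         [0, 0,  0]]

def policy(board, moves):
    """Stand-in 'policy network': assign a probability to each candidate move.

    Toy heuristic: prefer empty points adjacent to more of our own stones,
    turned into probabilities with a softmax."""
    def score(move):
        r, c = move
        neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        return sum(board[i][j] for i, j in neighbors
                   if 0 <= i < 3 and 0 <= j < 3)
    exps = [math.exp(score(m)) for m in moves]
    total = sum(exps)
    return [e / total for e in exps]

def value(board):
    """Stand-in 'value network': estimate who is winning the whole position.

    Toy heuristic: net stone count, scaled to roughly [-1, +1]."""
    flat = [x for row in board for x in row]
    stones = sum(1 for x in flat if x != 0)
    return sum(flat) / max(1, stones)

legal = [(r, c) for r in range(3) for c in range(3) if board[r][c] == 0]
probs = policy(board, legal)
best = legal[max(range(len(legal)), key=lambda i: probs[i])]
print("best move:", best, "position value:", round(value(board), 2))
```

In the real system, both functions are deep neural networks trained on human games and self-play rather than hand-written heuristics, but the division of labor is the same: one scores moves, the other scores positions.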

It's far more powerful than any Go-playing computer program to date.

3. The machine is not guaranteed to win.

In October, AlphaGo beat a European champion of the game, Fan Hui. But Fan is ranked far below the program's current opponent, Lee Sedol, who is considered among the best Go players in the world. Lee may still be able to beat AlphaGo.

Nevertheless, the overall approach is clearly working, and soon AlphaGo, or another similar program, will likely overtake the world's best.

4. This program will not lead to a dystopian future in which humanity is enslaved by killer robots. At least not for a few more years.

The deep-learning approach is making great strides. It's getting particularly good at recognizing images (and more creepily, human faces).

But skull-crushing mechanical suzerains? Probably not. For one thing, physical robots still suck. Seriously. They're just terrible.

And Google has a rosier purpose in mind anyway. It hopes programs such as AlphaGo can improve language translation and health care tools. It might even someday be used to build a sophisticated virtual assistant. "I've concluded that the winner here, no matter what happens, is humanity," Eric Schmidt, the chairman of Google's parent company, Alphabet, said in a pre-match news conference.

Regardless of what you think about AI, it seems likely this sort of program will change the way we live and work in the years ahead.

Copyright 2017 NPR. To see more, visit http://www.npr.org/.

KELLY MCEVERS, HOST:

All right, now for a story of man versus machine. Starting tomorrow in South Korea, a human champion of the board game known as Go will square off against a computer that's been programmed by Google. If the machine beats the man - and it is a man - over the five-day match, then researchers say it will be a milestone in the quest for artificial intelligence. Joining me to talk about this is NPR science correspondent Geoff Brumfiel. And Geoff, can you tell us what Go is?

GEOFF BRUMFIEL, BYLINE: So yeah, Go is this ancient game that originated thousands of years ago in China. There are two players, there's a big board, one player has white stones, the other has black stones. They take turns trying to fence off territory and capture each other's pieces. I took an online tutorial this morning, and it's not that hard. I got the basics in about 10 minutes or so.

MCEVERS: And of course, you learned it from a computer.

BRUMFIEL: Yeah, I suppose ironically, I did. That's a good point.

MCEVERS: So, I mean, computers have beaten people before - right? - in things like chess. If Go is so easy to learn, why is this match-up a big deal?

BRUMFIEL: The thing about Go is that there are just way more possible moves each turn than chess. So back in 1997 when Deep Blue, that IBM computer, beat Garry Kasparov, it literally just crunched all the possible moves. You can't do that with Go. There are just too many options each turn. And way more choices are bad news for computers. They're bad at making choices when they have a lot of options.

MCEVERS: Well, let's talk about Google's program for this. I mean, what makes it so special?

BRUMFIEL: The real key to this program is that it learns. In fact, the entire approach is called deep learning. Basically, the computer teaches itself almost from scratch. First, it studied an online database of about 100,000 human matches, and then it played itself millions of times. And as it played, it got better. It reprogrammed itself and basically improved. And that's the key to this whole thing.

MCEVERS: Oh yeah. I mean, that sounds much more like a human.

BRUMFIEL: That's right. And actually, this is based on the way neuroscientists think the brain works. The fancy name for this kind of program is a neural network. Now this particular one is actually two subprograms. One studies each turn, what the best move to make is, and the other one just spends its whole time trying to figure out who's winning. And it works pretty well. In October, this program beat a European champion.

MCEVERS: So what do you think? Is the computer going to win?

BRUMFIEL: I think it's actually hard to say. The European champion was ranked something like 489 on goratings.org, the popular Go rating website. Lee Sedol, the guy who they're playing, is number four on that site. He's been called a world champion. I think he may be able to best this machine based on what I've been reading, but what is clear is overall, this approach is working. And sooner or later, computers are going to master Go.

MCEVERS: All right, the real question here, Geoff, then of course is should we fear the rise of machines?

BRUMFIEL: So Eric Schmidt, the chairman of Alphabet, which is Google's parent company, was at a pre-match press conference and here's what he said.

(SOUNDBITE OF ARCHIVED RECORDING)

ERIC SCHMIDT: I've concluded that the winner here, no matter what happens, is humanity.

BRUMFIEL: So that's his take. My view on this is that these neural networks are powerful in a lot of ways, but the way they learn is still pretty narrow. They can't do everything under the sun, and they certainly can't think the way humans do. So Kelly, for now, you know, an AI host is not going to take over ALL THINGS CONSIDERED.

MCEVERS: OK, good (laughter). NPR science correspondent Geoff Brumfiel, thank you.

BRUMFIEL: Thanks.