In January 2014, his three-year-old high-tech start-up was acquired by Google for $625 million. This week, more than 100 million people watched as a computer program using his company’s technology prevailed in a match against a human master of Go, an ancient Asian game renowned for its complexity — a feat considered greater than the 1997 victory of IBM’s Deep Blue over world chess champion Garry Kasparov.
According to DeepMind co-founder Demis Hassabis, the research behind DeepMind’s technology will drive huge developments in the areas of medicine, robotics and even climate change.
Last year, to help answer the questions, “Who is Demis Hassabis?” and “How did this London-born Greek Cypriot turn a three-year-old company into Google’s largest European acquisition?,” we shared this look at his background, along with Demis’s four tips for building a blue-sky start-up.
When it comes to this latest achievement — AlphaGo’s 4-1 victory over South Korean Go master Lee Se-dol — artificial intelligence experts had predicted that a computer program would need at least ten more years of development before it would be able to beat a Go master at this level.
Mr. Hassabis calls Go the “most profound game humankind has devised.”
As Hassabis told The Verge last week, “Go has always been the pinnacle of perfect information games. It’s way more complicated than chess in terms of possibility, so it’s always been a bit of a holy grail or grand challenge for AI research, especially since Deep Blue… Something like DeepMind was always my ultimate goal. I’d been planning it for more than 20 years, in a way.”
As Choe Sang-Hun reports in the New York Times, Go is a two-person game of strategy said to have been created in China more than 3,000 years ago, in which players compete for territory by placing black and white stones on intersections of a board of 19 horizontal and 19 vertical lines.
Millions of Go fans in Asia watched intently as AlphaGo won the fourth game of the best-of-five match, in a five-hour duel between the computer and the human Go master, during which each player was given one minute to deliberate and foresee complex moves and countermoves before placing a stone.
But, as Choe Sang-Hun reports, AlphaGo’s 4-1 victory is more than just a historic stride for computer programmers and artificial intelligence researchers trying to create software that can outwit humans in board games.
WHY DEEPMIND’S VICTORY IS SO IMPORTANT
Why is beating a master at Go so important? Writing in a New York Times op-ed, Andrew McAfee and Erik Brynjolfsson explain that, unlike chess, no human can explain how to play Go at the highest levels. The top players, it turns out, can’t fully access their own knowledge about how they’re able to perform so well.
On top of that, there are many more possible Go games than there are atoms in the universe, so even the fastest computers can’t simulate a meaningful fraction of them.
To make matters worse, it’s usually far from clear which possible moves to even start exploring.
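The scale of that claim is easy to check with a back-of-the-envelope calculation. Using commonly cited (approximate) figures — an average branching factor of about 250 legal moves per turn and a typical game length of about 150 moves — the number of possible Go games is on the order of 250^150, which dwarfs the standard estimate of roughly 10^80 atoms in the observable universe:

```python
# Back-of-the-envelope comparison; both figures below are commonly
# cited approximations, not exact values.
BRANCHING_FACTOR = 250      # ~legal moves per turn in Go (assumed)
GAME_LENGTH = 150           # ~moves in a typical game (assumed)

go_games = BRANCHING_FACTOR ** GAME_LENGTH   # rough count of possible games
atoms_in_universe = 10 ** 80                 # common estimate

print(go_games > atoms_in_universe)  # True: far more games than atoms
print(len(str(go_games)))            # 360: the count has ~360 digits
```

Even simulating one game per atom in the universe would cover a vanishingly small fraction of the possibilities, which is why brute-force search was never going to crack Go.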
The AlphaGo victories vividly illustrate the power of a new approach to artificial intelligence.
Instead of trying to program smart strategies into a computer, the DeepMind team builds systems that can learn winning strategies almost entirely on their own, by seeing examples of successes and failures.
The examples came from huge libraries of Go matches between top players amassed over the game’s 2,500-year history. To understand the strategies that led to victory in these games, the system made use of an approach known as deep learning, which has demonstrated remarkable abilities to tease out patterns and understand what’s important in large pools of information.
AlphaGo also played millions of games against itself, using another technique called reinforcement learning to remember the moves and strategies that worked well.
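The self-play idea in the paragraph above can be sketched in miniature. The toy below is not AlphaGo’s method (which combines deep neural networks with tree search); it is a hand-rolled tabular example, under the assumption that a trivially small game (Nim: take 1–3 sticks, whoever takes the last stick wins) stands in for Go. Two copies of the same agent play each other, and moves that appeared in winning games are nudged toward higher value:

```python
import random
from collections import defaultdict

# Q maps (sticks_remaining, move) -> learned value of that move.
Q = defaultdict(float)
ALPHA = 0.1  # learning rate

def choose(sticks, explore=0.1):
    """Pick a move: mostly the best-known one, sometimes a random one."""
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(sticks, m)])

def play_and_learn():
    """One game of self-play, then reinforce the winner's moves."""
    sticks, history, player = 15, [], 0
    while sticks > 0:
        move = choose(sticks)
        history.append((player, sticks, move))
        sticks -= move
        player = 1 - player
    winner = 1 - player  # the player who took the last stick
    for p, s, m in history:
        reward = 1.0 if p == winner else -1.0
        Q[(s, m)] += ALPHA * (reward - Q[(s, m)])  # nudge toward outcome

random.seed(0)
for _ in range(20000):
    play_and_learn()

print(choose(2, explore=0.0))  # 2: the agent learned to take both and win
```

After enough self-play games the table encodes which moves tend to win, with no winning strategy ever programmed in. AlphaGo applies the same principle, but with neural networks standing in for the lookup table so that it generalizes across Go’s astronomically many positions.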
McAfee and Brynjolfsson write that these two approaches — deep learning and reinforcement learning — have both been around for a while, but until recently it was not at all clear how powerful they were, or how far they could be extended. In fact, it’s still not, but AI applications based on these approaches are improving at a gallop, with no end in sight. And the applications are broad, ranging from speech recognition and credit card fraud detection to radiology and pathology.
AN INTERVIEW WITH DEMIS HASSABIS
The implications of the technology behind DeepMind’s programming are profound. In this interview with Sam Byford of The Verge, Demis Hassabis shares his thoughts on the future impact of his research on everything from smartphones to particle physics.
The main future uses of AI that you’ve brought up this week have been healthcare, smartphone assistants, and robotics. How is all of this expected to fit into Google’s product roadmap or business model in general?
We have a pretty free rein over what we want to do to optimize the research progress. That’s our mission, and that’s why we joined Google, so that we could turbocharge that. And that’s happened over the last couple of years. Of course, we actually work on a lot of internal Google product things, but they’re all quite early stage, so they’re not ready to be talked about. Certainly a smartphone assistant is something I think is very core — I think Sundar [Pichai] has talked a lot about that as very core to Google’s future.
Of all the future possibilities [for applications] you’ve identified, [the smartphone] is the one that’s most obviously connected to Google as a whole.
Could you give a timeframe for when some of these things might start making a noticeable difference to the phones that people use?
AI, I think, in the next two to three years you’ll start seeing it. I mean, it’ll be quite subtle to begin with, certain aspects will just work better. Maybe looking four to five, five-plus years away you’ll start seeing a big step change in capabilities.
What are the most immediate use cases for learning robots that you can see?
Self-driving cars are kind of robots, but they’re mostly narrow AI currently, although they use aspects of learning AI for the computer vision — Tesla uses pretty much standard off-the-shelf computer vision technology, which is based on deep learning. I’m sure Japan’s thinking a lot about things like elderly care bots; household cleaning bots, I think, would be extremely useful for society, especially in demographics with an aging population, which I think is quite a pressing problem.
Well, you just have to think “Why don’t we have those things yet?” Why don’t we have a robot that can clean up your house after you? The reason is, everyone’s house is very different in terms of layout, furniture, and so on, and even within your own house, the house state is different from day to day — sometimes it’ll be messy, sometimes it’ll be clean. So there’s no way you can pre-program a robot with the solution for sorting out your house, right? And you also might want to take into account your personal preferences about how you want your clothes folded. That’s actually a very complicated problem. We think of these things as really easy for people to do, but actually we’re dealing with hugely complex things.
I wonder about when we get to more advanced robots, where the tipping point of “good enough” is going to be. Are we going to stop before meaningful human-level interaction and work around the quirks?
Yeah, I mean, probably. I think everyone would buy a reasonably priced robot that could stack the dishes and clean up after you — these pretty dumb vacuum cleaners are quite popular anyway, and they don’t have any intelligence really. So yeah, I think every step of the way, incrementally, there’ll be useful things.
So what are your far-off expectations for how humans, robots, and AIs will interact in the future? Obviously people’s heads go to pretty wild sci-fi places.
I don’t think much about robotics myself personally. What I’m really excited to use this kind of AI for is science, and advancing that faster. I’d like to see AI-assisted science where you have effectively AI research assistants that do a lot of the drudgery work and surface interesting articles, find structure in vast amounts of data, and then surface that to the human experts and scientists who can make quicker breakthroughs.
I was giving a talk at CERN a few months ago; obviously they create more data than pretty much anyone on the planet, and for all we know there could be new particles sitting on their massive hard drives somewhere and no-one’s got around to analyzing that because there’s just so much data. So I think it’d be cool if one day an AI was involved in finding a new particle.
DEMIS’S TIPS FOR BUILDING A BLUE-SKY START-UP

1. Be about 5 years ahead. You need novelty and a competitive/intellectual advantage. Keep on learning and improving, which will allow you to identify a specific niche you can overtake. But don’t be too innovative (e.g. 50 years ahead). Read more at: hellenext.org