Creepily Epic: AI now doesn’t need humans to defeat humans

In the latest breakthrough in the world of AI, Google’s DeepMind announced that its artificial intelligence (AI) algorithm had learned to play Go from scratch, without any human help.

Go is an ancient Chinese board game in which opponents place stones on a board, aiming to capture territory. Compared to chess, Go has a larger board, many more possible moves and longer games.

In 2016, DeepMind’s AI, dubbed AlphaGo, defeated 18-time world champion Lee Sedol of South Korea.

To train the AI to play the game, engineers fed the algorithm thousands of amateur and professional games, which it analyzed and learned from. After that, it started playing against itself, gradually perfecting its game by learning from its own moves.

The latest iteration, called AlphaGo Zero, learned to play from scratch, knowing only the rules of the game. It skipped the first step of the previous version and went straight to playing against itself.
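To make the self-play idea concrete, here is a heavily simplified, hypothetical sketch in Python. It swaps Go for the tiny game of Nim and replaces AlphaGo Zero’s deep neural network and Monte Carlo tree search with a simple lookup table, so it illustrates only the principle that the program starts with nothing but the rules and improves purely by playing against itself; the names (NimRules, TabularPolicy, self_play_game) are illustrative and do not come from DeepMind.

```python
# Toy illustration of learning a game from scratch via self-play.
# Game: Nim with 7 stones, each player takes 1 or 2, taking the last stone wins.

import random
from collections import defaultdict


class NimRules:
    """Tiny stand-in for the game rules (a full Go engine is beyond a sketch)."""
    def initial_state(self):
        return 7

    def legal_moves(self, stones):
        return [m for m in (1, 2) if m <= stones]

    def apply(self, stones, move):
        return stones - move

    def is_over(self, stones):
        return stones == 0


class TabularPolicy:
    """Stand-in for the neural network: a table of learned move values per state."""
    def __init__(self):
        self.values = defaultdict(float)   # (state, move) -> learned score

    def choose(self, state, moves, explore=0.1):
        if random.random() < explore:
            return random.choice(moves)    # keep exploring new lines of play
        return max(moves, key=lambda m: self.values[(state, m)])

    def update(self, history, winner):
        # Reinforce the winning side's moves, discourage the losing side's.
        for state, move, player in history:
            self.values[(state, move)] += 1.0 if player == winner else -1.0


def self_play_game(policy, rules):
    """Play one game of the policy against itself; return move history and winner."""
    state, player, history = rules.initial_state(), 0, []
    while not rules.is_over(state):
        move = policy.choose(state, rules.legal_moves(state))
        history.append((state, move, player))
        state = rules.apply(state, move)
        player = 1 - player
    # Whoever made the last move (the previous player) took the final stone and wins.
    return history, 1 - player


def train(num_games=5000):
    rules, policy = NimRules(), TabularPolicy()
    for _ in range(num_games):
        history, winner = self_play_game(policy, rules)
        policy.update(history, winner)     # learn purely from its own games
    return policy


if __name__ == "__main__":
    trained = train()
    # With 7 stones the first player can force a win by leaving a multiple of 3,
    # so after enough self-play the learned opening move should tend toward 1.
    print(trained.choose(7, [1, 2], explore=0.0))
```

The real system is vastly more sophisticated, but the loop structure is the same idea: generate games by playing yourself, then update the player from the outcomes of those games.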

Before long, it surpassed human-level play and beat the previous version by 100 games to 0.


Within three hours, the AI was playing at the level of a human beginner, and after 70 hours it was already playing at a superhuman level.

It took only three days to surpass the abilities of the version that beat world champion Lee Sedol. In 40 days, the new program surpassed all previous versions and arguably became the world’s strongest Go player.

“While it is still early days, AlphaGo Zero constitutes a critical step towards this goal. If similar techniques can be applied to other structured problems, such as protein folding, reducing energy consumption or searching for revolutionary new materials, the resulting breakthroughs have the potential to positively impact society,” the company said.

Watch Professor David Silver of DeepMind talk about the new AI: