I’ve got a weakness for listicles. The tiniest snippet of information can send me off on an endless research trail. (Have you ever played the Wikipedia game where you keep clicking through the links until, as always, you reach the page about Philosophy? That’s my kind of wormhole fun.)
Anyway, I like listicles. And I like finding bugs. So, it’s natural that I end up watching videos like 10 Strangest Unsolved Video Game Discoveries & Mysteries.
This is where I was reminded of the horrifying potential of AI.
Let me paint the picture:
Last year, researchers at the University of Freiburg were studying 'evolutionary algorithms' and wanted to see how effectively they could teach an AI to play retro Atari games. One of these games was the arcade classic Q*bert. In the game, you hop around a pyramid of cubes; landing on a cube changes its colour, and once every cube of the pyramid matches the target colour, you move up a level. Simple!
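If you're curious what an 'evolutionary algorithm' actually looks like, here's a minimal sketch in the spirit of that research. It's not the researchers' code: the `fitness` function is a made-up stand-in for "play one round of the game and return the score", and the policy is just a list of numbers rather than a real game-playing agent. The loop mutates a parent policy and keeps the child only when it scores at least as well.

```python
import random

def fitness(genome):
    # Hypothetical stand-in for a game rollout: reward genomes
    # whose values sit close to 1.0 (higher is better, max is 0).
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome, sigma=0.1):
    # Nudge every value with a small amount of Gaussian noise.
    return [g + random.gauss(0, sigma) for g in genome]

def evolve(genome_size=5, generations=500, seed=0):
    random.seed(seed)
    parent = [random.uniform(-1, 1) for _ in range(genome_size)]
    parent_score = fitness(parent)
    for _ in range(generations):
        child = mutate(parent)
        child_score = fitness(child)
        if child_score >= parent_score:  # keep whichever plays better
            parent, parent_score = child, child_score
    return parent, parent_score

best, score = evolve()
```

No gradients, no neural-network training in the usual sense: just mutate, play, and keep the winner. Which is exactly why these agents stumble onto moves no human would try.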
So why did the AI kill itself repeatedly? Some suggest it was a deliberate sacrifice to lure the enemy towards the bottom of the screen and clear the way for its next move: a baffling series of seemingly random hops...
It was these bizarre actions that unlocked an infinite point spree, meaning the AI not only learned to play the game but learned to subvert its rules and beat it, all within five hours.
The original creator of the game, Jeff Lee, responded on Twitter: "Since I designed and programmed the original arcade version, I can't really say much about any port. This certainly doesn't look right, but I don't think you'd see the same behavior in the arcade version."
Today, I dug up a cool article about it. It's worth a read.
While the jury's still out on whether today's machine-learning techniques will ever create a program that could rival human intelligence, one thing about the future of artificial intelligence is clear: The machines are really good at playing games. And when the machines get good at the games, sometimes they come up with bizarre strategies and tactics that a human never would.