Saturday, December 15, 2018

Musings on the AI apocalypse

I was watching the video below and felt like waxing philosophic for a moment.

I usually hate it when people ask me about the robot uprising. It's a narrow, misconstrued question that doesn't take into account that we know very little about artificial intelligence, or intelligence in general. And we don't know what we don't know.

The fear from experts is brought on by the fact that the technologies are accelerating. The fact that processing power will be twice what it is currently in just a couple of years, and so on, creates an unknowable horizon of what will be possible. That all being said, we are currently nowhere near a dangerous AI. The "awesome" systems of "AI" and "Machine Learning," propped up by bloggers and used by Google, are based on algorithms from the 60's that can be fooled into thinking a turtle is a gun (MIT Paper from last year).
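To make that fragility concrete, here is a toy sketch (my own illustration, not the MIT turtle demo itself) of an adversarial example: nudging an input slightly in the direction of a linear classifier's weights flips its decision, even though the input barely changes. The weights, inputs, and labels here are all made up for demonstration.

```python
import numpy as np

w = np.array([1.0, -2.0, 3.0])      # hypothetical classifier weights
x = np.array([0.1, 0.05, -0.05])    # an input the classifier calls "turtle"

def classify(v):
    # A bare-bones linear classifier: positive score -> "gun"
    return "gun" if w.dot(v) > 0 else "turtle"

eps = 0.1                           # small perturbation budget
x_adv = x + eps * np.sign(w)        # fast-gradient-sign-style nudge

print(classify(x))       # turtle
print(classify(x_adv))   # gun
```

Each component of the input moved by only 0.1, yet the label flipped. Real attacks on image classifiers work on the same principle, just in a space of millions of pixels.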

But that doesn't mean that they are not dangerous. We are in a Jurassic Park situation where people get so excited "to see if they could, they won't stop to see if they should." And that can lead to the kind of AI apocalypse detailed in the video below: a situation where a single-function, stupid-supersmart program finds a "unique" solution to a problem. It's not a malicious Terminator; it's simply machine error.