The rise of the machines is at hand. Advances in AI are moving at a remarkable pace, and artificial intelligence already handles countless interactions behind the scenes every day; even simple internet searches rely on it. What might come next as machines are built to learn and think more like humans?
Recently, Google unveiled AlphaGo, an advanced AI that beat a human grandmaster at the ancient game of Go. Several companies, including Google and Facebook, raced to build a machine that could learn the complex game and defeat a human opponent. Go is far more elaborate than chess: a typical position offers approximately 250 possible moves, while chess averages about 35 per turn. That complexity made Go an excellent proving ground for AI development. But why chase such a dream?
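To get a feel for what those branching factors mean, here is a minimal sketch of how the number of possible move sequences compounds with depth. The per-move figures (roughly 250 for Go, 35 for chess) come from the paragraph above; the depths chosen are arbitrary illustrations.

```python
# Rough illustration of game-tree growth from average branching factors.
# Figures are the approximate per-position move counts cited above.

GO_BRANCHING = 250
CHESS_BRANCHING = 35

def game_tree_size(branching_factor: int, depth: int) -> int:
    """Number of distinct move sequences of the given depth,
    assuming a constant branching factor."""
    return branching_factor ** depth

for depth in (2, 4, 6):
    go = game_tree_size(GO_BRANCHING, depth)
    chess = game_tree_size(CHESS_BRANCHING, depth)
    print(f"depth {depth}: Go ~{go:.2e} sequences, chess ~{chess:.2e}")
```

Even at a depth of just six moves, Go's tree is thousands of times larger than chess's, which is why brute-force search was never going to crack the game and a learning system was needed.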
Google hopes to use the lessons learned from AlphaGo to enhance the company's main focus: internet search. Google already uses a form of AI called RankBrain to help interpret search queries. The system mimics aspects of the human brain and is capable of learning over time, producing faster results that are not bound by hand-written search algorithms, and those results tend to be more accurate than ones produced by traditional methods.
The rush toward smarter AI has its detractors. Elon Musk, Bill Gates, Stephen Hawking, and philosophers such as Nick Bostrom have all sounded the alarm about the potential harms of AI development. Machines that can think on their own and work through complex tasks the way a human brain does offer real benefits, but building them carries inherent dangers. Developers and researchers claim we have complete control over such systems, yet all it takes is one rogue machine, or one person with malicious intent, to turn those machines into something worse.
As the field of artificial intelligence progresses, researchers and laypeople alike ought to stay aware of the possibilities ahead. It is not fearmongering to consider what could go wrong if such systems fell into the wrong hands, or if they reached conclusions on their own that harmed humanity. It sounds like science fiction, but it is closer than ever. Technology is advancing at a rapid pace, and it is important to proceed with caution. Maybe these worries will amount to nothing, but vigilance isn't a bad thing.