Google’s massive artificial neural network has learned to love lolcats.
The big brains at Google recently moved humanity significantly closer to extinction by building a neural network with a billion connections across 16,000 processor cores and then turning it loose on YouTube. But as it turns out, the AI is less interested in heralding doom for all organic life than it is in the endlessly amusing antics of kittycats.
Three days, ten million random YouTube thumbnails and one “deep learning” algorithm later, the G-brain was able to pick out pictures of cats from a list of 20,000 different items with 74.8 percent accuracy. Less amusingly, it achieved 81.7 percent accuracy in detecting human faces and 76.7 percent accuracy in identifying human body parts. What makes the achievement remarkable is that the machine was never given labels telling it what was what; it figured things out entirely on its own.
“We never told it during the training, ‘This is a cat,’” said Dr. Jeff Dean, the Google Fellow who led the study. “It basically invented the concept of a cat.”
“Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation,” explains the team’s more scientifically precise paper, “Building High-Level Features Using Large Scale Unsupervised Learning.” “We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies.”
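For the technically curious, the core idea is simple to sketch: show a network unlabeled images and train it only to reconstruct its own input, so that useful feature detectors emerge without anyone ever saying “cat.” The toy below is a bare-bones autoencoder in that spirit, written in plain NumPy with random arrays standing in for YouTube thumbnails; it is a minimal sketch of the technique, not the paper’s nine-layer, billion-connection architecture, and every name in it is hypothetical.

```python
# Minimal unsupervised feature learning: an autoencoder trained only on
# reconstruction error. No labels appear anywhere, just like Google's setup,
# though this toy is orders of magnitude smaller than their network.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for unlabeled thumbnails: 1,000 random 8x8 "images", flattened.
X = rng.random((1000, 64))

n_hidden = 32
W1 = rng.normal(0, 0.1, (64, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))   # decoder weights
lr = 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    h = sigmoid(X @ W1)          # encode: hidden feature activations
    X_hat = h @ W2               # decode: reconstruct the input
    err = X_hat - X              # reconstruction error is the only signal

    # Backpropagate the squared reconstruction loss through both layers.
    grad_W2 = h.T @ err / len(X)
    grad_h = err @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# After training, each column of W1 is a learned feature detector. In the
# Google network, one such unit ended up firing strongly for cat faces.
best_unit = np.argmax(np.abs(W1).sum(axis=0))
print(f"most strongly tuned hidden unit: {best_unit}")
```

On real image data, the columns of the encoder weights develop selectivity for recurring visual patterns, which is the small-scale analogue of the paper’s “cat neuron.”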
And while even the massive G-brain is “dwarfed” by real human brains, David Bader, the executive director of high-performance computing at the Georgia Tech College of Computing, said that more accurate modeling is coming quickly. “The Stanford/Google paper pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts,” he said. “The scale of modeling the full human visual cortex may be within reach before the end of the decade.”
The success of the project has resulted in its relocation from Google’s “X lab” to its primary facility, a move that has almost certainly brought the end of days several steps closer. Even so, Dr. Andrew Ng, Dean’s co-lead on the project, thinks that truly self-teaching machines are still a long way off. “It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet,” he said.
Then again, if it digs cat videos on the internet, how far off can it be?
Source: New York Times
Published: Jun 26, 2012 05:58 pm