Google's artificial intelligence subsidiary DeepMind has picked up new learning tools that have allowed it to set its best performance records thus far, thanks to a new agent Google calls UNsupervised REinforcement and Auxiliary Learning, or UNREAL. The agent is a breakthrough in AI research because it lets DeepMind adopt learning techniques very similar to those of humans.
Google's DeepMind has already proven itself with past achievements, like recreating human speech, defeating a Go world champion and cutting down the company's power bills, according to Engadget. Now, with the UNREAL agent, it is learning game stages ten times faster and can reach up to 87 percent of expert human performance on Labyrinth.
UNREAL enabled DeepMind to amass these record-breaking self-learning skills by drawing on techniques related to the way animals dream and the way babies learn motor skills. Whereas previous approaches required training Google's DeepMind over long stretches of time, UNREAL allows researchers to have the AI learn on its own, according to ZDNet.
The results of the new agent were described by DeepMind researchers in a paper entitled "Reinforcement Learning with Unsupervised Auxiliary Tasks." It outlines how the techniques work, explaining that the agent replays negatively and positively rewarding events, much as animals dream about them, to speed up the learning process and teach DeepMind to use visual cues.
"By learning on (sic) rewarding histories much more frequently, the agent can discover visual features predictive of reward much faster," Google wrote on the paper.
These techniques were tried and tested on 57 Atari games and 13 different levels in Labyrinth. Not only did UNREAL allow the AI to perform excellently on the games, it also didn't need to be customized for each one, signaling the dawn of an era in which DeepMind can adapt more flexibly and in shorter time frames.