Google doesn’t seem to be slowing down on artificial intelligence (AI), and the effort is delivering satisfying results.
Aside from creating a machine that beat the world’s board game champions, Google has built an AI that is capable of designing its own AI.
To make it even more interesting, within months of its creation, this AI-built AI has gone from analyzing words to deciphering complex imagery.
In a blog post, Google engineers explain how the AutoML (Automated Machine Learning) system performs its functions:
“In our approach (which we call “AutoML”), a controller neural net can propose a “child” model architecture, which can then be trained and evaluated for quality on a particular task. That feedback is then used to inform the controller how to improve its proposals for the next round.”
In simpler terms, a parent AI creates a baby AI, then trains it to perform a series of tasks. As the baby AIs carry out their functions, they send the results back to the parent AI, which then uses that feedback to improve its next proposals.
The parent AI even adds certain design features to the baby AIs, features that seem to have no clear use even to Google’s own researchers.
The parent AI can then create better baby AIs, making them more efficient at evaluating and performing tasks, which in the long run leaves the parent AI better informed and improved. The process can then be repeated over time.
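The propose-evaluate-feed-back loop described above can be sketched in a few lines of code. This is only a toy illustration, not Google’s actual AutoML: the “child model” here is a pair of made-up hyperparameters (layer count and width) scored against a hypothetical sweet spot, and the “controller” is a simple mutate-the-best-proposal rule rather than a neural net. All names and numbers in it are assumptions for the sake of the example.

```python
import random

def evaluate_child(arch):
    # Toy stand-in for "train and evaluate a child model on a task":
    # score is highest near a hypothetical ideal of 6 layers, width 64.
    # In the real system this step is a full training run.
    layers, width = arch
    return -((layers - 6) ** 2 + (width - 64) ** 2 / 100)

def propose(best):
    # "Controller": mutate the best-known architecture slightly,
    # i.e. use past feedback to inform the next proposal.
    layers = max(1, best[0] + random.choice([-1, 0, 1]))
    width = max(8, best[1] + random.choice([-16, 0, 16]))
    return (layers, width)

random.seed(0)
best = (2, 16)                 # initial child architecture
best_score = evaluate_child(best)
for _ in range(200):           # rounds of propose -> evaluate -> feed back
    child = propose(best)
    score = evaluate_child(child)
    if score > best_score:     # feedback improves future proposals
        best, best_score = child, score

print("best architecture:", best, "score:", round(best_score, 2))
```

After a couple of hundred feedback rounds, the surviving architecture is far better than the starting one, which is the essence of the loop: each generation of children informs the next round of proposals.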
“Our approach can design models that achieve accuracies on par with state-of-the-art models designed by machine learning experts (including some on our own team!),” the post reads.
Although AutoML seems focused on creating impressive AI children, a recent blog post by Google reveals that there is more to the program than just making babies.
Engineers have been working on getting AutoML to analyze and process images and recognize distinct features. You might have had to log in to a website and been asked to prove you are not a robot by selecting images in a grid of tiles.
At present, humans are still better than AI at dealing with images. However, our days at the top of the leaderboard might be numbered, because AutoML can now pick out specific objects in images better than any other computer vision system.
For example, say you’ve got an image of a person climbing a mountain. AutoML’s baby AI, codenamed NASNet, can pick out the individual elements: a person, their walking stick, their backpack, the clouds, the Sun, and so on. The program can do this with up to 82.7% accuracy.
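An accuracy figure like that 82.7% is, at its core, just the fraction of test images the model labels correctly (top-1 accuracy). A minimal sketch, using made-up predictions and labels rather than any real benchmark data:

```python
def top1_accuracy(predictions, labels):
    # Fraction of predictions that exactly match the true label.
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical model outputs vs. ground truth for five image regions.
preds = ["person", "backpack", "cloud", "sun", "walking stick"]
truth = ["person", "backpack", "cloud", "bird", "walking stick"]

print(top1_accuracy(preds, truth))  # 0.8, i.e. 80% accuracy
```

On a real benchmark such as ImageNet, the same calculation is run over tens of thousands of validation images instead of five toy labels.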
Just months ago, a child AI performed this feat using data sets that featured catalogues of words and color images.
Knowing that the feat was performed by an AI that was itself created by another AI is what makes this astonishing; maybe a little troubling, too.
Imagine you need to run some complex computational or monitoring task but you can’t code. AutoML could build a program for you with very little or no external input.