Automated techniques could make it easier to develop AI
“BERT takes months of computation and is very expensive—like, a million dollars to generate that model and repeat those processes,” Bahrami says. “So if everyone wants to do the same thing, then it’s expensive—it’s not energy efficient, not good for the world.”
Although the field shows promise, researchers are still searching for ways to make autoML techniques more computationally efficient. For example, methods like neural architecture search currently build and test many different models to find the best fit, and the energy it takes to complete all those iterations can be significant.
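To see why the search itself is so expensive, consider a bare-bones version of the idea: every candidate model has to be trained to completion before it can even be scored. The sketch below illustrates this with a handful of hypothetical candidate architectures using scikit-learn; the candidate list and dataset are assumptions made for illustration, not the systems discussed here.

```python
# A minimal sketch of why search-based autoML is computationally costly:
# each candidate configuration is fully trained before it can be scored.
# The candidate "architectures" and dataset here are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Each entry is one candidate to evaluate: a tuple of hidden-layer sizes.
candidates = [(32,), (64,), (32, 32), (64, 32), (128, 64)]

best_score, best_arch = -1.0, None
for arch in candidates:
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    model.fit(X_train, y_train)        # one full training run per candidate
    score = model.score(X_val, y_val)  # validation accuracy
    if score > best_score:
        best_score, best_arch = score, arch

print(f"best architecture {best_arch}, validation accuracy {best_score:.3f}")
```

With real deep-learning models, each of those training runs can take days or weeks, which is where the energy cost accumulates.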
AutoML techniques can also be applied to machine-learning algorithms that don't involve neural networks, such as building random forests or support-vector machines to classify data. Research in those areas is further along, and many coding libraries are already available for people who want to incorporate autoML techniques into their projects.
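As a concrete illustration of what that library support looks like in practice, here is a minimal hyperparameter search over a random forest using scikit-learn's RandomizedSearchCV. The article doesn't name specific libraries, so this choice, and the search space, are assumptions.

```python
# A sketch of library-based autoML for non-neural models: randomized
# hyperparameter search over a random forest. Search space is illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# The search space: which hyperparameters to explore and over what ranges.
param_distributions = {
    "n_estimators": [50, 100, 200, 400],
    "max_depth": [None, 5, 10, 20],
    "min_samples_split": [2, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=20,   # number of sampled configurations
    cv=5,        # 5-fold cross-validation per configuration
    random_state=0,
)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
print(f"cross-validated accuracy: {search.best_score_:.3f}")
```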
The next step is to use autoML to quantify uncertainty and to address questions of trustworthiness and fairness in the algorithms, says Hutter, a conference organizer. In that vision, standards for trustworthiness and fairness would become constraints on a model much like any other requirement, such as accuracy, and autoML could catch and automatically correct biases in those algorithms before they're released.
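One way to read that vision in code is to treat a fairness metric as a hard constraint during model selection, and compare accuracy only among candidates that satisfy it. The sketch below is a speculative illustration with an assumed demographic-parity threshold and a synthetic sensitive attribute, not an implementation of any system described in the article.

```python
# A hedged sketch of fairness as a model-selection constraint alongside
# accuracy. The parity threshold and "group" attribute are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = (X[:, 0] > 0).astype(int)  # stand-in for a sensitive attribute
X_tr, X_val, y_tr, y_val, g_tr, g_val = train_test_split(
    X, y, group, random_state=0)

def demographic_parity_gap(y_pred, g):
    # Difference in positive-prediction rates between the two groups.
    return abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())

candidates = [LogisticRegression(C=c, max_iter=1000).fit(X_tr, y_tr)
              for c in (0.01, 0.1, 1.0, 10.0)]

MAX_GAP = 0.1  # fairness constraint, chosen arbitrarily for illustration
feasible = []
for model in candidates:
    preds = model.predict(X_val)
    if demographic_parity_gap(preds, g_val) <= MAX_GAP:
        feasible.append((model.score(X_val, y_val), model))

# Pick the most accurate model among those satisfying the constraint.
if feasible:
    best_acc, best_model = max(feasible, key=lambda t: t[0])
    print(f"selected model accuracy: {best_acc:.3f}")
else:
    print("no candidate met the fairness constraint")
```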
The search continues
But for something like deep learning, autoML still has a long way to go. The data used to train deep-learning models, such as images, documents, and recorded speech, is usually dense and complicated, and processing it demands immense computational power. The cost and time of training these models can be prohibitive for anyone other than researchers at deep-pocketed private companies.
One of the competitions at the conference asked participants to develop energy-efficient alternatives to neural architecture search, a technique that is notorious for its computational demands. It automatically cycles through countless deep-learning models to help researchers pick the right one for their application, but the process can take months and cost over a million dollars.
The goal of these alternative algorithms, called zero-cost neural architecture search proxies, is to make neural architecture search more accessible and environmentally friendly by drastically cutting its appetite for computation: a proxy runs in seconds rather than months. These techniques are still in the early stages of development and are often unreliable, but machine-learning researchers predict they could make the model-selection process far more efficient.
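For a sense of how such a proxy can score a network in seconds, here is a toy sketch in the spirit of the "synflow" proxy from the zero-cost NAS literature: it ranks untrained networks using a single forward and backward pass, with no training at all. Treating this particular proxy as representative is an assumption on my part; the conference entries may use different scores.

```python
# A toy zero-cost proxy (synflow-style): score untrained networks from one
# forward/backward pass instead of training them. Candidate architectures
# below are illustrative assumptions, not entries from the competition.
import torch
import torch.nn as nn

def synflow_score(model: nn.Module, input_shape=(1, 20)) -> float:
    # Use absolute weights so contributions cannot cancel, then measure how
    # much each parameter influences the output for an all-ones input.
    for p in model.parameters():
        p.data = p.data.abs()
    model.zero_grad()
    model(torch.ones(input_shape)).sum().backward()
    return sum((p * p.grad).abs().sum().item()
               for p in model.parameters() if p.grad is not None)

# Rank candidate architectures in seconds, with no training runs.
candidates = {
    "small":  nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2)),
    "medium": nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)),
    "deep":   nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2)),
}
for name, net in candidates.items():
    print(f"{name}: proxy score {synflow_score(net):.1f}")
```

The appeal of this approach is that the score replaces months of candidate training, which is exactly the computation these proxies aim to eliminate; the open question, as the article notes, is how reliably such scores predict trained performance.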