Research into machine-learning specialty finds new home at USC Viterbi

A $1.5 million grant from the Defense Advanced Research Projects Agency will help USC build a foundation in the burgeoning field of transfer learning.

November 14, 2019 Ben Paul

With a new $1.5 million grant, the growing field of transfer learning has come to the Ming Hsieh Department of Electrical and Computer Engineering at the USC Viterbi School of Engineering.

The grant was awarded to three professors — Salman Avestimehr, Antonio Ortega and Mahdi Soltanolkotabi — who will work with Ilias Diakonikolas at the University of Wisconsin-Madison to address the theoretical foundations of this field.

Modern machine learning models are breaking new ground in data science, achieving unprecedented performance on tasks like classifying images into a thousand different categories. This is achieved by training gigantic neural networks.

“Neural networks work really well because they can be trained on huge amounts of pre-existing data that has previously been tagged and collected,” said Avestimehr, the principal investigator of the project. “But how can we train a neural network in scenarios with very limited samples, by for example leveraging [or transferring] the knowledge from a related problem that we have already solved? This is called transfer learning.”
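The idea Avestimehr describes can be sketched in a few lines: learn a representation on a data-rich source task, then reuse it so that a new task can be solved from only a handful of labeled examples. The setup below is a hypothetical toy construction for illustration (linear "features" shared by both tasks), not anything from the funded project itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source task: plenty of labeled data generated from shared features W_true.
W_true = rng.normal(size=(4, 8))          # underlying features both tasks rely on
X_src = rng.normal(size=(1000, 8))
Y_src = X_src @ W_true.T

# Learn a feature extractor from the abundant source data (least squares).
W_feat, *_ = np.linalg.lstsq(X_src, Y_src, rcond=None)   # shape (8, 4)

# Target task: only 5 labeled examples -- too few to fit an 8-parameter
# model from scratch. Its labels are a new mix of the same features.
head_true = rng.normal(size=4)
X_tgt = rng.normal(size=(5, 8))
y_tgt = (X_tgt @ W_true.T) @ head_true

# Transfer: freeze the source features, fit only the small 4-parameter head.
F_tgt = X_tgt @ W_feat
head, *_ = np.linalg.lstsq(F_tgt, y_tgt, rcond=None)

# The transferred model generalizes to fresh target data.
X_test = rng.normal(size=(200, 8))
y_test = (X_test @ W_true.T) @ head_true
pred = (X_test @ W_feat) @ head
print(np.allclose(pred, y_test, atol=1e-6))
```

Fitting all 8 input weights directly from 5 examples would be underdetermined; reusing the source-task features shrinks the target problem to 4 unknowns, which is exactly the sample-complexity question the quote raises.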

Tackling the challenges of transfer learning

Situations that humans can easily adapt to still cause problems for neural networks. Take navigation: A robot may be trained to navigate effectively in New York City, but drop that same robot onto the streets of Shanghai and it will usually fail. Faced with new data in the form of unrecognized street signs, unfamiliar geography and a different language, this highly advanced neural network is suddenly rendered useless.

The Defense Advanced Research Projects Agency is supporting 14 research teams to tackle the challenges of transfer learning, most of which are focused on the application side. The USC Viterbi-led group is one of only three teams focusing on the theoretical foundations.

“We are particularly excited to have an opportunity to focus on solving fundamental questions: What makes it possible to transfer information from one task to another? How much data is needed for a specific task, in addition to information that was transferred?” Ortega said. “Answers to these questions are essential to accelerate progress in machine learning problems for which large amounts of data are not available.”