USC’s cutting-edge biometrics research receives technology-transfer government contract

USC Information Sciences Institute researchers are pioneering an effort to identify and prevent spoofing attacks on biometrics systems. (Photo illustration/iStock)

Science/Technology


Cyberattacks are increasingly sophisticated as institutions of all kinds turn to biometrics to confirm user identity. USC Information Sciences Institute researchers are on the front lines, developing systems to guard against security breaches and hacking attempts.

April 07, 2022 USC staff

In an age when we depend on biometric authentication processes such as fingerprint and iris recognition to perform day-to-day tasks, theft of biometric data can put anyone at great risk. From generating realistic-looking masks that hijack facial recognition systems to replicating fingerprint and iris patterns, these spoofing attacks can take many forms and have become increasingly prevalent in our digital world.

With additional government funding, USC Information Sciences Institute researchers from the Visual Intelligence and Multimedia Analytics (VIMAL) group are pioneering an effort to identify and prevent spoofing attacks on biometrics systems. They have dubbed it the Biometric Authentication with Timeless Learner (BATL) research project.

It is one of many projects underway at USC that are creating AI-based tools for the public good.

“My group is dedicated to ethical applications of artificial intelligence,” said Wael AbdAlmageed, a research director at ISI who leads the team, whose results have recently appeared in journals such as IEEE Sensors. “We want to use artificial intelligence in an ethical way.”

AbdAlmageed, a research professor at the USC Viterbi School of Engineering, leads the five-member VIMAL team at ISI. The team’s research and software development efforts span multiple niches — from computer vision to voice recognition tools. Together they have even developed an algorithm that accurately flags deepfakes — AI-generated videos that propagate disinformation and misinformation in media and on social platforms.

The other team members are Mohamed Hussein, a research lead and member of the VIMAL group; Hengameh Mirzaalian, a machine-learning and computer vision scientist; Leonidas Spinoulas, research scientist at ISI; and Joe Mathai, research programmer at ISI. The VIMAL team collaborated with Sebastien Marcel and his colleagues at the Switzerland-based Idiap Research Institute, a nonprofit institute focused on biometric research and machine learning.

The VIMAL group develops complex and ever-evolving machine learning and AI algorithms that resist these spoofing attacks as hackers attempt to access critical personal and financial data. The research is sponsored by the U.S. Intelligence Advanced Research Projects Activity’s (IARPA) Odin program, which invests in cutting-edge research for the intelligence community. IARPA is under the U.S. Office of the Director of National Intelligence and is based in Washington, D.C.

Since the initial conception of the proposal five years ago, the USC team has made unprecedented strides in the biometrics research community by turning an abstract idea into a patented, working product. Their achievements have no doubt caught the attention of the research community and beyond. Within the past year, the team has received a research extension from the government to transfer their technology to several federal agencies going forward.

This level of recognition places USC ISI at Marina del Rey at the forefront of biometrics research and expands the project’s influence far beyond the research community.

Trailblazing advancements

Several major improvements have been made to the biometrics model since last year.

Described by Odin Program Manager Lars Ericson at IARPA as “trailblazing,” the VIMAL team at USC’s ISI developed a first-of-its-kind algorithm that explains the decisions made by the biometric anti-spoofing system using natural language. For security analysts, this means an easier and more accessible understanding of the reasoning behind the decision to tag something as a spoofing attempt. This research was presented at the 2021 IEEE International Conference on Automatic Face and Gesture Recognition.
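The article does not describe how the BATL system generates its explanations, so the following is only an illustrative sketch of the general idea: pairing an anti-spoofing decision with a template-based natural-language rationale built from the cues the detector found suspicious. All feature names and thresholds here are hypothetical.

```python
# Toy illustration (not the BATL method): a spoof detector that reports both
# a verdict and a human-readable explanation of which cues drove it.

# Hypothetical per-cue spoof scores, each in [0, 1]; higher = more suspicious.
FEATURES = ["skin_texture", "depth_consistency", "specular_reflection"]

def explain_decision(scores, threshold=0.5):
    """Return (is_spoof, explanation) from per-cue spoof scores."""
    suspicious = [name for name, s in zip(FEATURES, scores) if s > threshold]
    is_spoof = len(suspicious) > 0
    if is_spoof:
        explanation = ("Flagged as a presentation attack because these cues "
                       "look anomalous: " + ", ".join(suspicious) + ".")
    else:
        explanation = "Accepted as bona fide: all measured cues look normal."
    return is_spoof, explanation

# Example: abnormal skin texture and specular reflection, normal depth.
verdict, text = explain_decision([0.9, 0.2, 0.7])
print(text)
```

The point of the design is that an analyst reads the sentence, not the raw scores; a real system would generate far richer text from learned features rather than fixed templates.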

In an age when technology is constantly advancing, it’s inevitable that new, never-before-seen spoofing attacks will emerge.

“The main challenge was the ability to identify unknown spoofing attacks and continuously learn them,” said AbdAlmageed.

To boost the system’s adaptability and security against spoofing attacks, the team has created new machine-learning algorithms that ensure the system continuously learns to detect new spoofing attacks as they emerge.
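The article does not reveal the team's algorithms, but the idea of detecting unknown attacks and then learning them continuously can be sketched with a toy open-set classifier: known classes are stored as centroids, anything far from every centroid is flagged as unknown, and a confirmed new attack type is added as a new class without retraining from scratch. The class name, radius, and two-dimensional features are all invented for illustration.

```python
import math

class OpenSetSpoofDetector:
    """Toy open-set detector: known classes are feature centroids; a sample
    far from every centroid is flagged "unknown" and can later be learned
    as a new class (a minimal stand-in for continual learning)."""

    def __init__(self, radius=1.0):
        self.radius = radius      # max distance to count as a known class
        self.centroids = {}       # label -> centroid feature vector

    def learn(self, label, samples):
        """Add (or replace) a class from a list of feature vectors."""
        dim = len(samples[0])
        self.centroids[label] = [
            sum(s[i] for s in samples) / len(samples) for i in range(dim)
        ]

    def classify(self, x):
        """Return the nearest known label, or "unknown" if all are too far."""
        best, best_d = "unknown", float("inf")
        for label, centroid in self.centroids.items():
            d = math.dist(x, centroid)
            if d < best_d:
                best, best_d = label, d
        return best if best_d <= self.radius else "unknown"

det = OpenSetSpoofDetector(radius=1.0)
det.learn("bona_fide", [[0.0, 0.0], [0.2, 0.1]])
print(det.classify([5.0, 5.0]))   # far from everything -> "unknown"
det.learn("print_attack", [[5.0, 5.0], [5.1, 4.9]])
print(det.classify([5.0, 5.0]))   # now recognized as the new attack class
```

A production system would use deep feature embeddings and calibrated thresholds rather than raw centroids, but the loop is the same: flag the unknown, label it, fold it back into the model.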

The team has also implemented more robust and sophisticated AI models that can seamlessly adapt to new environments, as well as a compact and lightweight sensor that can be easily manufactured and deployed.

Widening use of biometrics

There are countless promising uses of biometrics technology. For example, Japan used facial recognition technology as a tool to prevent the spread of coronavirus at its Olympic Games. The implementation of biometrics marked the first time the Olympics had ever used the technology, proving that biometrics are useful not only for security purposes, but also as a large-scale method of ensuring public health.

While biometrics research extends far beyond the lab, VIMAL’s work is an example of deploying biometric data and technology on a large scale.

Despite this major advancement, the use of biometric technology outside of strict regulatory boundaries remains contentious.

“Generally speaking, I am not in favor of using facial recognition without very clear and transparent regulations in terms of how the technology will be used,” said AbdAlmageed.

AbdAlmageed regards his research as critical for keeping ahead of these developments and the security concerns that accompany them. By developing new and better technology, he can advise on the ethical use of machine learning and provide doctors with new tools for diagnosing conditions that affect facial features. Through his work, he can even curtail cyberattacks and slow the spread of misinformation that disrupts elections and harms public health.

“AI is not mature enough to be used in the world without safeguards. The way we use it in my lab, whether it’s [for] deepfakes or for helping doctors, we know exactly what the limitations are,” AbdAlmageed said.