
Can AI Train Forensic Interviewers to Unlock Trauma?

USC researchers are investigating whether AI can help decode what children are trying to say (or not say) during interviews about crimes they have witnessed or been subjected to.

January 22, 2019

Forensic interviews are intended to gather evidence from minors who may have information about a crime under investigation. But many children who have been traumatized—often by those in a position of (supposed) care and authority—are unable to express or explain what has happened to them. Even highly trained professionals are at times ill-equipped to decode what a child is really saying. As a result, cases fall apart and justice goes undelivered.

Dr. Shri Narayanan, founder of USC's Signal Analysis and Interpretation Laboratory (SAIL), wants artificial intelligence to step in. Ideally, AI would work alongside professionals as an extra "brain" to identify patterns in intonation, speech, and responses that could uncover what the child is unable to say. In a recent interview, Dr. Narayanan explained how that might be possible. Here are edited and condensed excerpts of our conversation.

Dr. Narayanan, while preparing to interview you, I happened to see The Children Act, starring Emma Thompson as a UK Family Court judge who must decide whether to overrule a minor's parents. Because of their religious beliefs, they refuse to let their son have a blood transfusion, but without it he will die of leukemia. The judge decides to go to the child's hospital bed and interview him. No spoilers here, but an AI in her laptop case would have been useful, as the minor in question sent a highly charged and painful tangle of verbal signals.
That's such a good example. Yes, our research has found that AI can provide valuable insights into a child's mental state by analyzing signals that a person unfamiliar with the child might well miss in a high-stakes situation such as a forensic interview.

Explain how the AI does this.
At the USC Signal Analysis and Interpretation Laboratory (SAIL), we conduct fundamental and applied research, using engineering methods and tools to understand the human condition and creating technologies with direct societal relevance that support and enhance human experiences. In this legal realm, the AI looks at linguistic patterns and decodes dialogue, getting at the nuanced details while working alongside the professionals on the case. We identify the exact level of detail and precision of the information provided, as well as the affect carried in word choice and vocal intonation. Right now, all of this is done in a very subjective way by humans.
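To make "vocal intonation" concrete: one of the most basic signals such systems track is pitch (fundamental frequency) over time. The sketch below is not SAIL's pipeline; it is a minimal, self-contained illustration of estimating pitch from one voiced frame using autocorrelation, with a synthetic tone standing in for real speech.

```python
import math

def estimate_pitch(samples, sample_rate, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) of one voiced frame
    by finding the autocorrelation peak in the speech pitch range."""
    n = len(samples)
    lag_min = int(sample_rate / fmax)  # shortest plausible pitch period
    lag_max = int(sample_rate / fmin)  # longest plausible pitch period
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, n - 1)):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# Synthetic 200 Hz tone as a stand-in for a voiced speech frame.
sr = 8000
frame = [math.sin(2 * math.pi * 200 * t / sr) for t in range(400)]
print(round(estimate_pitch(frame, sr)))  # ≈ 200
```

Production systems track this contour frame by frame across an utterance; a rising or flattened contour is one of the behavioral cues an analysis tool could surface to an interviewer.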

(Dr. Shri Narayanan)

This isn't automation but augmentation? You collaborated with Professor Lyon at USC's Gould Child Interviewing Lab on this work, right?
Exactly. My colleague Dr. Lyon is an expert in this area, working with attorneys who interact with victims of maltreatment and abuse. Our AI is designed to complement human intelligence, because when we hear something, it is filtered through our own subjectivity and mental models. The AI is an objective training tool: it provides extra insights, gives guidance on how to pace questions, picks up on cognitive and affective aspects from behavioral cues, and suggests potential hypotheses. We envision our AI being used to train interviewers to build better skills in these situations, developing more open questioning methods and, in effect, bringing reproducible analytics to what is very much a subjective realm today.

Through your work at SAIL, you're also an expert in working with children who have autism. I'm assuming this is useful with forensic testimonies, as traumatized children might well display neuro-divergent behavior.
That's very true. Through my work with children with autism, we've developed an understanding of how they can move at their own pace and how there are different rules of engagement. When a child is "different" or in a traumatized state, they may need more time to process than the human interviewer is prepared for. That's why conversations often break down, and it's possible the human might unwittingly coerce an inaccurate testimony. AI can provide extra help in these difficult situations.
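One simple, measurable proxy for pacing is the silence between an interviewer's question and the child's answer. The sketch below uses hypothetical timestamped turns (the speaker labels, times, and function name are all invented for illustration, not drawn from SAIL's tools) to compute those response latencies, which a training tool could flag when an interviewer jumps in too soon.

```python
# Hypothetical timestamped turns: (speaker, start_sec, end_sec).
turns = [
    ("interviewer", 0.0, 4.2),
    ("child", 9.8, 12.0),       # 5.6 s of silence before answering
    ("interviewer", 12.5, 15.0),
    ("child", 16.1, 20.4),      # 1.1 s of silence
]

def response_latencies(turns, responder="child"):
    """Seconds of silence before each of the responder's turns."""
    gaps = []
    for prev, cur in zip(turns, turns[1:]):
        if cur[0] == responder and prev[0] != responder:
            gaps.append(round(cur[1] - prev[2], 2))
    return gaps

print(response_latencies(turns))  # [5.6, 1.1]
```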

How did you train the AI?
Dr. Lyon provided over 200 anonymized transcripts of forensic interviews from child abuse cases. These were transcribed from audio files and then coded along a variety of dimensions, including automatically analyzed speech features, deep behavioral aspects, and interaction dynamics. We then developed custom models for each interview so the AI could look for patterns and outliers, suggest improvements, and unlock insights.
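To give a flavor of what "coding" a transcript can mean: forensic-interview training distinguishes open invitations ("Tell me what happened") from closed, option-posing questions ("Did he...?"). The toy heuristic below is not Dr. Lyon's coding scheme or SAIL's model; real schemes are far richer and the real models are learned, not keyword-based. It simply shows the kind of per-utterance label such a pipeline might produce.

```python
# Toy heuristic coding of interviewer questions as "open" vs. "closed".
OPEN_STARTERS = ("tell me", "what happened", "describe", "how did", "why")
CLOSED_STARTERS = {"did", "was", "were", "is", "do", "does", "can", "could"}

def code_question(utterance):
    """Label one interviewer utterance with a coarse question type."""
    text = utterance.lower().strip()
    if text.startswith(OPEN_STARTERS):
        return "open"
    words = text.split()
    first = words[0].strip("?,.") if words else ""
    return "closed" if first in CLOSED_STARTERS else "other"

questions = [
    "Tell me everything that happened that day.",
    "Did he touch you?",
    "What happened next?",
]
print([code_question(q) for q in questions])  # ['open', 'closed', 'open']
```

Aggregating labels like these over a session gives the reproducible analytics mentioned above, e.g. the ratio of open to closed questions an interviewer asked.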

Take us back. You studied electrical engineering up to and beyond doctorate level. When, why, and how did you get into the field of speech and language technologies and AI-based conversational interfaces?
I grew up in India and almost went to medical school, but ended up in electrical engineering. However, I was always interested in human function and interaction, how the brain works and so on. Electrical engineering offered me a "systems way" of looking at things, and I became drawn to the mathematics of signal processing as it pertains to human functioning. I took that analytical foundation and developed tools to capture speech and language.

You've also worked extensively within the technology industry, at AT&T Labs-Research and AT&T Bell Labs, culminating as a principal member of its technical staff.
Yes, when I worked at Bell Labs I met all these people working on various aspects of speech, and I was fascinated to learn about this amazing signal produced by people, with its complex neuro-cognitive underpinnings: how the vocal instrument is used to create the rich sounds we use to communicate (speech), and how we decode these signals via the ear (hearing). This work continues to fascinate me endlessly, especially when any of these systems gets perturbed, and how we can continually create scientific knowledge via data science, machine learning, and information theory.

Which bodies/institutions are funding your current research and to what end? When will your work on 'forensic interviewing' be available commercially to those working in the law?
We have multiple funders for the work in my laboratory, including the National Science Foundation, National Institutes of Health, Google, Simons Foundation, and Department of Defense. We've been very fortunate to have federal, industry, medical, and foundation support.

Right now, the work on forensic interviews is pure research, but we are committed to open-source software, sharing our data, and disclosing patents where appropriate. We have experience launching startups from our laboratory, such as Behavioral Signals, which focuses on measuring emotions and behaviors in conversational data. While technology development for supporting forensic interviews is still in the research stage, we are hopeful that a pathway to scale up through commercialization will emerge in the future.

Finally, what's next for you?
We are presenting another AI paper at the AAAI Conference on Artificial Intelligence (AAAI-19), [which begins Jan. 27], and we hope to continue the research and keep publishing our findings and tools in the coming year.


About S.C. Stuart

Contributing Writer

S. C. Stuart is an award-winning digital strategist and technology commentator for ELLE China, Esquire Latino, Singularity Hub, and PCMag, covering: artificial intelligence; augmented, virtual, and mixed reality; DARPA; NASA; US Army Cyber Command; sci-fi in Hollywood (including interviews with Spike Jonze and Ridley Scott); and robotics (real-life encounters with over 27 robots and counting).
