
New research will model hearing

OHSU research will focus on machine-learning software
A new grant will enable researchers to develop computational tools to study central auditory processing. (OHSU/Kristyna Wentz-Graff)

Scientists at Oregon Health & Science University will use a new $1 million grant from the federal BRAIN Initiative to create machine-learning software that mimics the brain’s ability to discern important sounds, like understanding speech, in a noisy environment.

Ultimately, the initiative could result in treatments for people affected by peripheral hearing loss or central auditory processing disorder.

Stephen David, Ph.D.

The grant will enable researchers to fit and evaluate a large number of computational models, said principal investigator Stephen David, Ph.D., associate professor of otolaryngology and behavioral neuroscience in the OHSU School of Medicine. The grant, officially awarded in September, is funding three years of work involving a collaboration between David’s lab and the lab of Nima Mesgarani, Ph.D., an associate professor of electrical engineering at Columbia University.

Together, they will develop computational tools to study central auditory processing.

“Fundamentally, we’re exploring a basic question about how the brain works,” said David, who is also part of the Oregon Hearing Research Center at OHSU.

Computer engineers have already applied machine learning concepts to create voice-recognition software and other forms of artificial intelligence intended to perform human activities.

The new initiative will build a software library that will enable researchers around the world to develop models mimicking the brain’s ability to discern distinct sounds and focus attention.

Think of carrying on a conversation along a busy street, tuning out background noise in an office, or recognizing your own child’s call in a noisy park. Researchers believe deep neural networks are the key to understanding the brain’s ability to screen out the noise and form a clear perception.

With acoustic trauma or age-related hearing loss, these signals become muddled.

“The fact that deep neural networks can replicate human behavior suggests this architecture captures what’s actually going on in the brain during sensory processing,” David said. “These models tell us how we screen the important from the unimportant.”

Launched in 2013, the National Institutes of Health Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative is a large-scale effort to accelerate neuroscience research. Since its launch, the federal initiative has provided about $1.3 billion to develop tools that improve understanding of the brain.

In addition to David, two researchers from the OHSU Vollum Institute, Tianyi Mao, Ph.D., and Haining Zhong, Ph.D., received a BRAIN Initiative grant in 2018 to study the brain’s amygdala, the small almond-shaped structure deep within the brain that contains circuitry critical for emotion.
