Dr. Barbara Shinn-Cunningham is a Professor of Cognitive and Neural Systems and Biomedical Engineering at Boston University and a National Security Science and Engineering Faculty Fellow.
April 20, 2010 - As an engineer who also happens to be a musician, studying how we hear was a natural choice for me when I entered graduate school. What I didn’t realize back then was that the work I do would have implications for hundreds of thousands of returning American war veterans who have service-related hearing injuries, not to mention the tens of millions of civilian Americans who have hearing loss.
In many social settings, like a cocktail party, multiple sounds reach the ears from all different directions. Normal-hearing, young, healthy listeners are good at focusing on whatever source they are interested in (like the attractive lawyer they just met) and ignoring other sounds (the snob opining about the hint of grapefruit in his chardonnay, the couple bickering about their finances, …). In other words, most listeners are able to filter out unwanted sound sources and focus on the sound that matters, a process known as “selective auditory attention”.
Understanding when and how selective auditory attention fails is a problem that has real consequences in every walk of life. Imagine not being able to converse with your spouse at the dinner table because of the rambunctious antics of your three young children, or not being able to understand a command directed at you during a critical moment on a battlefield. Failing to filter out unwanted sounds can lead to catastrophic outcomes, from social isolation to life-threatening decision errors.
Unfortunately, the ability to make sense of the cacophony that reaches our ears is not equally easy for everyone. Even mild hearing loss, early aging, or seemingly modest cognitive disabilities can degrade a listener’s ability to suppress unimportant sound and understand key messages.
My work explores which parts of the brain are involved in selective auditory attention and how different brain areas interact to control what we hear in a complex mixture of sound. By measuring how the brain reacts to different sound settings, we are trying both to understand what goes wrong when listeners have difficulty and to design ways of presenting sound that are robust, ensuring that critical messages “get through” the noise.
We are working towards next-generation auditory displays and assistive listening devices that take into account how the brain selectively attends to sound. While much of what we are doing is to improve communications for people in stressful, chaotic situations, I am even more motivated by the ways such work can break down barriers that keep many returning veterans from being able to communicate with family and friends in everyday settings.