In a crowded room where many people are talking, such as a family birthday party or a busy restaurant, our brains can focus attention on a single speaker.
Understanding this scenario and how the brain processes stimuli like speech, language, and music has been the research focus of Edmund Lalor, associate professor of Neuroscience and Biomedical Engineering at the University of Rochester Medical Center.
Recently, his lab found a new clue to how the brain unpacks this information and intentionally hears one speaker while tuning out or ignoring another. The researchers also found that the brain takes an extra step to understand the words coming from the speaker being listened to, and does not take that step with the other words swirling around the conversation.
"Our findings suggest that the acoustics of both the attended story and the unattended or ignored story are processed similarly. But we found there was a clear distinction between what happened next in the brain,” said Lalor.
For this study, recently published in The Journal of Neuroscience, participants simultaneously listened to two stories but were asked to focus their attention on only one. Using EEG brainwave recordings, the researchers found that the story participants were instructed to pay attention to was converted into linguistic units known as phonemes (the units of sound that can distinguish one word from another), while the other story was not.
"That conversion is the first step towards understanding the attended story. Sounds need to be recognized as corresponding to specific linguistic categories like phonemes and syllables, so that we can ultimately determine what words are being spoken -- even if they sound different -- for example, spoken by people with different accents or different voice pitches," Lalor said.