University of Texas at San Antonio researchers are developing technology to detect brain activity patterns that contribute to persistent developmental stuttering with the goal of reducing the frequency of interruption in the flow of speaking.
Funded by a two-year grant from the National Institutes of Health, the brain-computer interface technology identifies brain activity patterns associated with both successful and stuttered speech, using sensors placed on the head that stream the signals to a computer in real time.
“Because most stuttering happens right at the start when a person begins to say a word, we are taking the assumption it is not word production that is the problem,” said Jeffrey Mock, investigator and assistant professor of research at UTSA. Identifying brain patterns that occur as a person is preparing for speech might help inform treatment approaches, he said.
Stuttering, which includes the repetition of words or syllables, prolonging sounds, or pauses in speech, affects about 3 million people in the United States, according to the National Institute on Deafness and Other Communication Disorders. It is the most commonly diagnosed speech fluency disorder.
Stuttering occurs most often in children between the ages of 2 and 6, and males are three times as likely to stutter as females. About 10 percent of children stutter for some period of their life, but approximately 75 percent recover.
Neurogenic stuttering may occur following a stroke or traumatic brain injury, and in people who have Alzheimer’s disease as the brain struggles to coordinate the regions involved in speaking.
Mock, who has stuttered throughout his life, said that conventional speech therapy includes modification strategies that address physical tension, articulation, breathing regulation, and the rate of speech. By knowing the brain activity patterns associated with both stuttering and normal speech, a speech-language pathologist can develop strategies to train the brain to remain in the state associated with best performance when speaking.
Edward Golob, professor of psychology and the principal investigator of the grant, said training the brain to stay in its optimal state can be achieved through cognitive behavioral therapy, which is used to improve a person’s ability to accept and cope with stuttering, and to reduce both its perceived negative effects and its severity.
“Even just having the knowledge that their brain is operating in a [good, better, or best] state can help a person improve their speech and stuttering,” Golob said, noting that conscious and subconscious processes help guide the brain in the direction of optimal functioning.
The study currently has seven participants, all adults, who visit UTSA weekly to track stuttering patterns. Once the sensors are connected, the computer flashes words to the participant, who is asked to say each word out loud. Each word, or combination of letters, is chosen with the intention of producing stuttering.
At the UTSA cognitive neuroscience lab last Thursday, researchers asked a visitor to read the words that appeared on the screen, just as the study’s participants would.
After three exclamation points flashed three times, the screen read: “covigent zathion.” Pronouncing the made-up words proved difficult, and produced both anxiety and stuttering.
That difficulty is intentional, Golob said, because abnormalities in speech motor control also involve timing and the coordination of sensory and motor systems. As the brain works to make sense of the word and connect as best it can to the regions that facilitate language, the brain-computer interface technology analyzes the moments leading up to speech, looking for patterns that may indicate what contributes to a person’s stuttering.
“We are interested in helping people who stutter, but we have a broader vision in terms of how we can apply this to other disorders,” such as traumatic brain injuries and Alzheimer’s, Golob said. “We are studying how to get the most out of the brain that you have.”