While it might feel as though we do it without thinking, getting words from our brain and out of our mouths in an intelligible way is actually an incredibly complex process – and scientists just made a new discovery about a key part of it.
Our brains are constantly adjusting what we're saying based on what we're hearing, such as when we raise our voices in a loud environment. Disruptions to this feedback system have been linked to disorders including stuttering, autism, Parkinson's disease, and schizophrenia, among others.
New research has identified the part of the brain that makes sure our words are being properly articulated: the dorsal precentral gyrus. This knowledge could help treat speech problems and neural disorders in the future, the researchers say.
"Our study confirms for the first time the critical role of the dorsal precentral gyrus in maintaining control over speech as we are talking and to ensure that we are pronouncing our words as we want to," says neuroscientist Adeen Flinker from New York University.
While it was already known that the cerebral cortex controls the movements of the mouth, lips, and tongue that form words, the details of how this worked hadn't yet been fully established.
In the new study, researchers enlisted 15 people with epilepsy who were already scheduled to undergo brain surgery to pinpoint the cause of their seizures. That procedure involved fitting 200 electrodes to the brain, giving the team a rare opportunity to record neural activity directly.
During planned breaks in the surgery, the patients were asked to read words and short sentences aloud. As they read, they could hear their own speech played back through headphones.
During the experiment, the playback was delayed by 50, 100, or 200 milliseconds, or not delayed at all. This technique, known as delayed auditory feedback (DAF), has been used for decades to disrupt fluent speech as a way of analyzing how the brain adapts.
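To make the setup concrete, here is a minimal sketch of how a delayed-feedback signal of this kind could be simulated in software. It is purely illustrative and not the researchers' actual rig: the sample rate, the synthetic tone standing in for speech, and the function name apply_daf are all assumptions introduced for this example; only the delay values (0, 50, 100, and 200 milliseconds) come from the study described above.

```python
# Illustrative sketch only (not the study's setup): simulating delayed auditory
# feedback (DAF) by shifting a mono audio signal by a fixed number of samples.
import numpy as np

SAMPLE_RATE = 16_000          # Hz; assumed value for this sketch
DELAYS_MS = [0, 50, 100, 200] # delay conditions reported in the article


def apply_daf(signal: np.ndarray, delay_ms: int,
              sample_rate: int = SAMPLE_RATE) -> np.ndarray:
    """Return what the speaker would hear: their own voice delayed by
    `delay_ms` milliseconds (silence is padded at the start)."""
    delay_samples = int(round(sample_rate * delay_ms / 1000))
    if delay_samples == 0:
        return signal.copy()
    padding = np.zeros(delay_samples, dtype=signal.dtype)
    return np.concatenate([padding, signal])


if __name__ == "__main__":
    # One second of a synthetic tone stands in for a speech recording.
    t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
    speech = (0.1 * np.sin(2 * np.pi * 220 * t)).astype(np.float32)

    for delay in DELAYS_MS:
        heard = apply_daf(speech, delay)
        print(f"{delay:>3} ms delay -> playback starts "
              f"{len(heard) - len(speech)} samples late")
```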
Thousands of recordings were made in total, enabling the researchers to spot differences in neural activity as the delay increased. The study volunteers compensated by slowing their speech rhythms to match, much as you might when hearing an echo on a video call.
"Human speech production is strongly influenced by the auditory feedback it generates," write the researchers in their published paper. "When we speak, we continuously monitor our vocal output and adjust our vocalization to maintain fluency."
Normally, the brain can't be readily accessed while people are conscious and talking, which is why there are gaps in our knowledge about which brain regions handle which aspects of speech control.
The study showed that the superior temporal gyrus and the supramarginal gyrus were involved in correcting errors in speech; both regions have previously been implicated in aphasia, a disorder that impairs the ability to understand or produce language.
However, the dorsal precentral gyrus showed the most prominent activity when the delays were introduced, suggesting this part of the brain underpins our vocal self-monitoring.
Further research into these feedback mechanisms is now planned. One potential avenue is whether the dorsal precentral gyrus holds a representation of how spoken words are supposed to sound, and how it registers when the actual pronunciation differs.
"Now that we believe we know the precise role of the dorsal precentral gyrus in controlling for errors in speech, it may be possible to focus treatments on this region of the brain for such conditions as stuttering and Parkinson's disease, which both involve problems with delayed speech processing in the brain," says Flinker.
The research has been published in PLOS Biology.