- Locked-in syndrome is a neurological disorder where a person cannot speak or show facial expressions.
- Most people with this condition must rely only on eye blinking and movement to communicate with others.
- Researchers from the University of California San Francisco have developed a new way for people with locked-in syndrome to both communicate and show facial expressions through the use of a brain implant and digital avatar.
Locked-in syndrome is a rare neurological condition in which a person loses control of nearly all of their voluntary muscles.
Although a person with locked-in syndrome can fully understand what someone is saying or reading to them, they are unable to talk or show emotion through their face, such as happiness, anger, or sadness.
Many times, a person with locked-in syndrome depends on small movements, such as blinking, to communicate with others.
Now, researchers from the University of California San Francisco have developed a new way for people with locked-in syndrome to both communicate and show facial expressions through the use of a brain implant and digital avatar.
This study was recently published in the journal Nature.
Locked-in syndrome is relatively rare — fewer than 1,000 people in the United States have the condition.
The condition is usually caused by damage to a part of the brain called the brainstem.
The damage to the brainstem typically occurs during a stroke, but it can also happen due to nerve inflammation, tumors, infections, or other conditions such as amyotrophic lateral sclerosis (ALS).
When a person has locked-in syndrome, they lose the ability to move their voluntary muscles at will. However, they do not lose any cognitive ability, so they are able to think normally and understand when a person speaks or reads to them. And their hearing is not affected.
However, locked-in syndrome may also affect a person’s ability to breathe, as well as their ability to eat, since it can impair chewing and swallowing.
There is no cure or specific treatments currently available for locked-in syndrome. A doctor will treat the underlying cause of the condition and may prescribe physical and speech therapies.
The most common way for people with locked-in syndrome to communicate is through eye movements and blinking.
Nowadays, there are computer programs and other assistive technologies, such as eye-tracking devices, that can help them communicate with others.
And thanks to innovations in computer engineering and new technologies like artificial intelligence (AI), researchers have been developing new communication options for people with locked-in syndrome.
For example, earlier research has shown that brain implants can translate a paralyzed person’s attempted speech into text on a screen.
For the current study, researchers developed a new brain-computer technology using a brain implant and digital avatar. The digital avatar allows a person with facial paralysis to convey normal facial expressions and emotions.
The new technology was tested on a 47-year-old woman named Ann, who has locked-in syndrome following a brain stem stroke.
Medical News Today spoke to Dr. David Moses, an assistant professor of neurological surgery, part of the Chang Lab at the University of California San Francisco, and one of the co-lead authors of the study.
According to him, when we speak, complex patterns of neural activity in our speech-motor cortex are relayed through the brainstem to the muscles that move the face and vocal tract.
“For Ann and others who have suffered a brainstem stroke, this pathway is damaged, and signals from the speech-motor cortex cannot reach the articulator muscles,” he explained.
With this brain implant, Dr. Moses said, specialists are able to record neural activity from the cortex as Ann tries to speak and translate that activity directly into her intended words, bypassing her paralysis entirely.
He further explained how this works:
“We do this by first creating AI models trained on neural data as she tries to silently say many sentences — she does not actually vocalize as she’s trying to say these; she’s trying her best to ‘mouth’ the words in the sentences. By enabling the AI models to learn the mapping between the brain activity and the intended speech, we can then subsequently use these models in real-time to decode her brain activity into speech. The models use flexible intermediate representations of speech internally, which enables the decoder to output sentences that she didn’t try to say during training.”
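In broad strokes, the approach Dr. Moses describes resembles training a sequence model that maps multichannel neural recordings to an intermediate inventory of speech units, which are then decoded into sentences. The sketch below illustrates that general idea in Python; the model architecture, the phoneme inventory size, the CTC loss, and the synthetic data are illustrative assumptions, not the study’s actual system.

```python
# Minimal sketch, assuming a recurrent model that maps windows of
# multichannel neural activity to per-step phoneme scores. The electrode
# count matches the article; everything else is a stand-in.
import torch
import torch.nn as nn

N_ELECTRODES = 253   # electrode count reported in the article
N_PHONEMES = 40      # assumed size of an intermediate phoneme inventory

class SpeechDecoder(nn.Module):
    """Maps a sequence of neural feature vectors to per-step phoneme scores."""
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(N_ELECTRODES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_PHONEMES + 1)  # +1 for a CTC blank

    def forward(self, x):                  # x: (batch, time, electrodes)
        h, _ = self.rnn(x)
        return self.head(h)                # (batch, time, phonemes + blank)

model = SpeechDecoder()
ctc = nn.CTCLoss(blank=N_PHONEMES)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-ins for one training batch (the real inputs would be
# neural recordings made while the participant attempts to say sentences).
x = torch.randn(8, 200, N_ELECTRODES)            # 8 trials, 200 time steps
targets = torch.randint(0, N_PHONEMES, (8, 20))  # intended phoneme sequences
in_lens = torch.full((8,), 200)
tgt_lens = torch.full((8,), 20)

# CTC aligns unsegmented neural activity to the intended phoneme sequence,
# so no frame-by-frame labels are needed.
opt.zero_grad()
log_probs = model(x).log_softmax(-1).transpose(0, 1)  # CTC expects (T, B, C)
loss = ctc(log_probs, targets, in_lens, tgt_lens)
loss.backward()
opt.step()
```

A sequence loss like CTC is one standard way to learn a mapping from unsegmented activity to speech units; whatever the study’s actual internals, the key idea Dr. Moses describes is the same, a learned mapping from brain activity to a flexible intermediate representation of speech that generalizes to sentences not seen during training.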
Ann received a brain implant with 253 electrodes placed on specific surface areas of the brain critical for speech. A cable connects the brain implant to computers.
For weeks, Ann worked with researchers to train the artificial intelligence algorithms to recognize and respond to her unique brain signals for speech.
Researchers also created a digital avatar of Ann through software that simulates and animates facial muscle movements.
Using machine learning, they were able to link the software with the signals coming from Ann’s brain, converting them into movements of her avatar’s face that convey both speech and facial expressions.
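Conceptually, this avatar step amounts to converting each decoded feature vector into a set of facial animation parameters, often called blendshape weights, for every frame. The following sketch shows one minimal way to do that; the feature dimensions, blendshape names, and linear mapping are assumptions for illustration, not the study’s actual software.

```python
# Minimal sketch, assuming decoded articulatory features arrive as a vector
# per frame and the animation engine consumes blendshape weights in [0, 1].
import numpy as np

BLENDSHAPES = ["jaw_open", "lip_pucker", "smile", "brow_raise"]  # assumed set

def features_to_blendshapes(features, W, b):
    """Linearly map one decoded feature vector to clamped blendshape weights."""
    weights = np.clip(W @ features + b, 0.0, 1.0)  # animation weights in [0, 1]
    return dict(zip(BLENDSHAPES, weights))

# Toy example: a 6-dimensional decoded feature vector and a learned mapping
# (random stand-ins here) yield one frame of avatar animation weights.
rng = np.random.default_rng(0)
frame = features_to_blendshapes(rng.normal(size=6),
                                W=rng.normal(size=(4, 6)) * 0.1,
                                b=np.full(4, 0.5))
print(frame)  # e.g. {'jaw_open': 0.48, 'lip_pucker': 0.61, ...}
```

In practice, the mapping from neural signals to facial movement would itself be learned, much like the speech decoder, but the per-frame output consumed by the avatar renderer can be as simple as a small vector of weights like this.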
Additionally, the scientists were able to use footage from a pre-injury video to recreate Ann’s actual voice. This way, when she speaks through the digital avatar, it is her voice and not a default computerized voice.
When asked what the next steps on this new technology would be, Dr. Moses said there are many avenues for improvement.
“For the hardware, a wireless version is needed to improve feasibility as a clinical solution,” he noted.
“In terms of software, we want to integrate our approaches with her existing devices, so that she can use the system to write emails and browse the web. We also want to leverage some advances in generative AI modeling to improve our decoding outputs,” Dr. Moses added.
MNT also spoke about this study with Dr. Amit Kochhar, who is double board-certified in otolaryngology–head and neck surgery and facial plastic and reconstructive surgery, and who is director of the Facial Nerve Disorders Program at Pacific Neuroscience Institute in Santa Monica, CA. He was not involved in the research.
As a doctor who treats patients with facial paralysis, he said one of the most difficult things for patients is the inability to express their emotions to others.
“Research has shown that if a lay observer looks at someone whose face is half paralyzed, they can’t discriminate whether the person is conveying a happy emotion versus an angry emotion,” Dr. Kochhar explained. “And so there’s a lot of confusion on the part of the observer and then obviously frustration on the part of the patient.”
“And so if they had access to something like this, […] they might then still be able to communicate with others, like their family members, their friends, using this type of avatar technology so that they could then convey properly the emotions of happiness or surprise or anger without having that confusion,” he added.
Dr. Kochhar said he would like to see this technology used by more people to make sure it is reproducible and ensure the technology is economically feasible.
“If the cost of this device [is high] and it’s only available to a small percentage of the population that can afford it, it’s a great step forward, but that’s not going to be something that’s going to help a lot of other people,” he added.
Dr. Kochhar also said he would like to see this technology made portable: “The patient had to come to the center for it to be working — she wasn’t able to, at this time, take it with her so that she could be at home. And so those are the next steps of the evolution of this type of software.”