Technology

Implants and AI Convert Brain Impulses into Speech

Transforming brain signals into speech via implants and artificial intelligence is an exciting and rapidly expanding area of research and technology. The approach draws on neuroprosthetics, brain-computer interfaces (BCIs), and advanced artificial intelligence (AI) algorithms.

Researchers at Radboud University and UMC Utrecht have succeeded in converting brain signals into audible speech. By decoding signals from the brain with a combination of implants and AI, they were able to predict the words people wanted to say with an accuracy of 92 to 100%. Their findings were published this month in the Journal of Neural Engineering.

The research represents a promising development in the field of brain-computer interfaces, according to lead author Julia Berezutskaya, a researcher at Radboud University’s Donders Institute for Brain, Cognition and Behaviour and UMC Utrecht. Berezutskaya and her colleagues at UMC Utrecht and Radboud University used brain implants in patients with epilepsy to infer what those patients were saying.

Bringing back voices

‘Ultimately, we intend to make this technology available to patients in a locked-in state who are paralyzed and unable to communicate,’ says Berezutskaya. ‘These people lose the ability to move their muscles and, as a result, to speak. By building a brain-computer interface, we can analyze their brain activity and give them a voice again.’

For the experiments in their new study, the researchers asked non-paralyzed people with temporary brain implants to speak a number of words out loud while their brain activity was measured.

Brain signals transformed into speech through implants and AI

Berezutskaya: ‘We were then able to establish a direct mapping between brain activity on the one hand, and speech on the other hand. We also used advanced artificial intelligence models to translate that brain activity directly into audible speech. That means we weren’t just able to guess what people were saying, but we could immediately transform those words into intelligible, understandable sounds. In addition, the reconstructed speech even sounded like the original speaker in their tone of voice and manner of speaking.’
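
For readers curious what such a ‘direct mapping’ can look like in practice, the sketch below illustrates the general technique on synthetic data: a ridge regression from simulated per-electrode neural features to mel-spectrogram-like frames of speech. Every detail here, including the array shapes, the feature choice, and the linear model, is a hypothetical stand-in for illustration, not the model or data used in the study.

```python
# Minimal sketch of neural-activity-to-speech decoding (illustrative only;
# not the Radboud/UMC Utrecht pipeline). We simulate paired data: one
# neural feature vector per audio frame, and learn a linear mapping.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_frames, n_electrodes, n_mel = 2000, 64, 40   # hypothetical sizes

# Hypothetical ground-truth linear relationship plus noise, standing in
# for real paired recordings of brain activity and spoken audio.
true_map = rng.normal(size=(n_electrodes, n_mel))
neural = rng.normal(size=(n_frames, n_electrodes))   # e.g. per-electrode power
spectrogram = neural @ true_map + 0.1 * rng.normal(size=(n_frames, n_mel))

X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, test_size=0.2, random_state=0)

# Learn the brain-to-spectrogram mapping; a vocoder (e.g. Griffin-Lim or a
# neural vocoder) would then turn predicted spectrogram frames into audio.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
pred = decoder.predict(X_test)

# Per-frame correlation between predicted and actual spectrogram frames.
corr = np.mean([np.corrcoef(p, t)[0, 1] for p, t in zip(pred, y_test)])
print(f"mean frame correlation: {corr:.3f}")
```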

Researchers around the world are trying to work out how to recognize words and sentences in brain patterns. Here, the researchers were able to reconstruct intelligible speech from relatively small datasets, demonstrating that their models can uncover the complex mapping between brain activity and speech even with limited data. They also conducted listening tests with volunteers to evaluate how identifiable the synthesized words were. The positive results of those tests indicate that the technology not only identifies words correctly, but also conveys them audibly and understandably, much like a real voice.

Limitations

‘For the time being, there are some limitations,’ adds Berezutskaya. ‘In these experiments, we asked participants to say twelve words out loud, and those were the words we tried to detect. Predicting individual words is generally easier than predicting complete sentences. Large language models used in AI research may be helpful here in the future.

‘Our goal is to predict entire sentences and paragraphs of what people are trying to say based on brain activity alone. To get there, we will need more experiments, more advanced implants, larger datasets, and advanced AI models. All of these processes will take time, but things appear to be moving in the right direction.’
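
To illustrate what decoding from a fixed word list involves, the sketch below classifies which of twelve candidate words was spoken from a single simulated neural feature vector per trial. The data, feature dimensions, and logistic-regression classifier are hypothetical placeholders, not the study’s actual method.

```python
# Illustrative closed-vocabulary decoding (hypothetical data, not the
# study's classifier): predict which of 12 words was spoken from one
# neural feature vector per trial.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_words, trials_per_word, n_features = 12, 40, 128

# Each word gets its own (simulated) neural "signature" plus trial noise.
prototypes = rng.normal(size=(n_words, n_features))
X = np.vstack([p + 0.8 * rng.normal(size=(trials_per_word, n_features))
               for p in prototypes])
y = np.repeat(np.arange(n_words), trials_per_word)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"12-way word accuracy: {acc:.1%} (chance = {1 / n_words:.1%})")
```

Restricting predictions to a fixed word list is part of what makes high per-word accuracy feasible from limited data; open-ended sentence decoding is a far larger search space, which is where the large language models Berezutskaya mentions could help constrain the output.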