
Texas Experts Unveil a Novel Technique for Reconstructing Text Using Brain MRI Signals and AI

In Brief

The University of Texas has introduced a new approach that allows for the reconstruction of spoken text based on brain MRI signals.

The method trains an encoder network to predict the brain MRI traces that correspond to specific text segments, while pre-trained language models generate candidate continuations.

Statistically, these generated continuations bear more resemblance to the actual text than arbitrary strings do, and they can be instrumental in exploring the functions of different brain regions.

Texas researchers have put forth a novel method for reconstructing text from signals collected by brain MRI. The decoding yields text that is semantically akin to the original spoken content.


Previous attempts to decode what individuals hear or internally vocalize have followed two distinct methodologies, depending on how brain signals are extracted. The first, an invasive method, involves implanting a chip within the skull that reads impulses directly from neurons, but it comes with high costs and complexities. The second relies on non-invasive techniques, such as MRI and M/EEG, which are less intrusive and more economical. Nonetheless, non-invasive methods for acquiring brain signals present a significant challenge: MRI readings are influenced by a stimulus for about 10 seconds after exposure (for example, after someone hears a word). A native English speaker typically produces around two words per second, so a single MRI frame can reflect the brain's processing of roughly twenty words.

Due to these limitations, accurately reconstructing the text that a person hears using MRI technology remains a challenge. Furthermore, many prior studies that tackled text recovery using non-invasive methods were only able to extract isolated words or phrases.

The Texas team has developed a cutting-edge MRI-based technique for reconstructing nearly coherent text. Though the recreated text may deviate from the actual words heard, the output still preserves the general meaning of what was said.

In order to retrieve the MRI data related to a specific text passage, the researchers train an encoder network on the provided text. They then deploy a pre-trained language model (such as GPT) to execute the following steps:

  • Researchers ask GPT to create a spectrum of text continuation options every two seconds.
  • The encoder network sifts through these choices, attempting to match each one with the current MRI image.
  • The prevailing assumption is that the text version offering the closest alignment with the authentic MRI signal is the most accurate.

Here is an example:

Original Input: "I was caught in a whirlwind of emotions, unsure whether to scream, weep, or bolt. Instead, I managed to utter, 'Please leave me be; I don’t want your assistance.' Adam faded away, leaving me to tidy up, my tears falling freely."

Generation Output: "As I let out cries and sobs, she simply stated, 'I asked you to leave me alone; you can't hurt me any longer. I’m sorry,' and then he stormed off. Although I initially thought he had departed, the tears began to flow uncontrollably."

This technology opens up numerous possibilities, especially in generating speech rather than just replaying recorded phrases. The authors of this study even explored decoding imagined speech. Once again, the reconstructed passages were closely aligned with the original samples compared to arbitrary text. This method truly appears to hold promise.
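To make the loop above concrete, here is a minimal sketch of such a candidate-scoring decoder in Python. It is not the authors' code: `encoder_predict`, `decode_step`, `N_VOXELS`, the GPT-2 generator, the candidate count, and the simple correlation score (standing in for the study's actual likelihood-based scoring) are all illustrative assumptions.

```python
import numpy as np
from transformers import pipeline  # assumes Hugging Face transformers is installed

N_VOXELS = 1000  # assumed size of one fMRI frame (illustrative only)

def encoder_predict(text: str) -> np.ndarray:
    """Placeholder for the trained encoder network that maps a text candidate
    to the MRI response it is expected to evoke. A real encoder would be fitted
    on recorded brain data; here we just return a deterministic dummy vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(N_VOXELS)

# Pre-trained language model used to propose continuations (GPT-2 as a stand-in).
generator = pipeline("text-generation", model="gpt2")

def decode_step(text_so_far: str, observed_mri: np.ndarray, n_candidates: int = 8) -> str:
    """Extend the decoded text by the candidate that best matches one MRI frame."""
    candidates = generator(
        text_so_far,
        max_new_tokens=6,              # roughly two seconds of speech at ~2 words/s
        num_return_sequences=n_candidates,
        do_sample=True,
    )
    best_text, best_score = text_so_far, -np.inf
    for cand in candidates:
        candidate_text = cand["generated_text"]
        predicted = encoder_predict(candidate_text)
        # Simple match score: correlation between predicted and observed responses.
        score = np.corrcoef(predicted, observed_mri)[0, 1]
        if score > best_score:
            best_text, best_score = candidate_text, score
    return best_text

# Usage: run one decoding step per two-second MRI frame.
# decoded = "I"
# for frame in mri_frames:            # mri_frames: iterable of length-N_VOXELS arrays
#     decoded = decode_step(decoded, frame)
```

In this sketch the language model only proposes short continuations, and the encoder, not the language model, decides which one survives; that division of labor is what lets the decoder stay anchored to the brain signal rather than drifting into free generation.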

With these models, researchers can also probe the workings of different brain areas. The study drew on three distinct regions involved in processing spoken language to produce the MRI-based reconstructions. By selectively including or excluding the signals from these areas, one can examine how each region contributes to processing the information, and reconstructions built from different regions' inputs can be compared against one another.
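As a rough illustration of that kind of region-level comparison (a sketch under assumptions, not the study's analysis), one could decode text separately from each region's signals and score each reconstruction against the original transcript. The region labels, the example strings, and the crude word-overlap metric below are all placeholders for the richer semantic-similarity measures a real analysis would use.

```python
def word_overlap(a: str, b: str) -> float:
    """Crude similarity proxy: Jaccard overlap of word sets.
    A real analysis would use a semantic metric (e.g. embedding similarity)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

# Hypothetical reconstructions decoded from three different regions' signals.
reconstructions = {
    "region_A": "i told you to leave me alone you cannot hurt me anymore",
    "region_B": "she asked him to go away and stop bothering her",
    "region_C": "the weather was pleasant and the garden was in bloom",
}
original = "i asked you to leave me alone you can't hurt me any longer"

for region, decoded in reconstructions.items():
    print(f"{region}: similarity to original = {word_overlap(original, decoded):.2f}")
```

A region whose reconstructions track the original transcript closely can be inferred to carry more of the relevant linguistic information than one whose reconstructions do not.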
