
AI can now read your mind 

A new study has achieved dramatic results from exploring how large language models can more accurately generate text from recordings of brain activity
  • The research could revolutionise how millions of people with speech impairments and other disabilities can communicate


UPDATED: 06 Mar 2025, 7:57 am

The generation of text from mere thought may sound like science fiction, but a new study has laid the groundwork for producing natural language directly from our brain waves, reports News Medical.

The new study, published in the journal Communications Biology, examines how brain recordings can be integrated with large language models (LLMs) to enhance natural language generation. 

Previous research into translating brain activity to text relied on models that matched brain activity to a set of predefined language options – a fairly successful but rigid approach that failed to capture the full scope of human expression. By employing an LLM, the researchers aimed to produce accurate, open-ended language reconstruction from brain activity – both crucial if such technology is to be used outside a research environment.

While the method is not currently practical due to the cost and complexity of equipment and varied accuracy, this research still lays the groundwork for future real-world applications, including allowing individuals with speech and other disabilities to communicate quickly and seamlessly with the world around them.  

[See more: Scientists took years to solve a problem that AI cracked in two days]

For the study, researchers developed a new system that integrated brain recordings with an LLM to generate natural language. The model was trained on three public datasets containing functional magnetic resonance imaging (fMRI) recordings of participants exposed to various linguistic stimuli.

Researchers designed a neural network – a method for teaching AI how to process data – that translated the brain activity into a format legible to the LLM. This “brain adapter” extracted features from brain signals and combined them with traditional text-based inputs, allowing the LLM to generate words that aligned closely with linguistic information encoded in brain activity.
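The "brain adapter" idea described above can be sketched in a few lines of code. This is only an illustrative toy, not the study's actual model: the dimensions, the random data, and the single linear projection are all assumptions made for the sake of the example – real adapters are small neural networks trained end to end on thousands of scans.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper's actual sizes are not given here.
N_VOXELS = 1000   # flattened fMRI features for one recording window
EMBED_DIM = 64    # width of the LLM's token embeddings
N_TOKENS = 5      # length of the accompanying text prompt

# "Brain adapter": here just one linear projection mapping fMRI
# features into the LLM's embedding space (a sketch of the idea).
W_adapter = rng.normal(0, 0.02, size=(N_VOXELS, EMBED_DIM))

fmri_features = rng.normal(size=(N_VOXELS,))   # one brain recording
brain_embedding = fmri_features @ W_adapter    # shape (EMBED_DIM,)

# Ordinary embeddings the LLM would produce for the text prompt.
text_embeddings = rng.normal(size=(N_TOKENS, EMBED_DIM))

# Combine: prepend the brain-derived vector to the token embeddings,
# forming the joint sequence the LLM then decodes words from.
combined = np.vstack([brain_embedding[None, :], text_embeddings])
print(combined.shape)  # (6, 64): one brain "token" plus five text tokens
```

The key point the sketch captures is that brain signals, once projected into the same vector space as text, can sit alongside ordinary prompt tokens as input to the language model.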

Then, fMRI data generated by study participants while they listened to or read language stimuli was converted into a mathematical representation of brain activity and processed through the neural network. The system processed these combined inputs and generated word sequences based on brain activity and prior text prompts. With thousands of brain scans and matching linguistic inputs in the training data, researchers were able to generate more accurate text.

The study demonstrated that larger datasets yielded the highest accuracy – indicating that results could likely improve further with more training data – and that the new system was significantly better at generating language closely aligned with brain activity than traditional classification-based models. It produced more coherent, contextually appropriate text and captured meaningful linguistic patterns.

