Mapping the function of the brain's speech center is a crucial area of neuroscience, not least because millions of people each year suffer brain lesions that impair their ability to communicate. Yet progress has been slow in recent years, because mouth movements distort recorded brain waves. So far, the most effective way to filter out this noise has been to surgically implant electrodes in the brain, but that approach is highly invasive, which severely limits testing and significantly increases costs.
Meta (formerly Facebook) is investing heavily in artificial intelligence research as competition among the major technology companies grows increasingly intense. A centerpiece of this push is the Fundamental Artificial Intelligence Research (FAIR) labs, which Meta established to develop advanced machine intelligence (AMI): artificial intelligence that perceives and thinks much as humans do. The work described here brought together the expertise of FAIR's Paris laboratory and the Basque Center on Cognition, Brain and Language in Spain.
In the past, advances in brain research focused primarily on non-invasive techniques, such as using EEG to record brain signals through the skull and then decoding them into images or text. These techniques have been highly inaccurate, however, because the captured signals are weak and riddled with distortions: previous decoding efforts achieved an accuracy of only about 40%. Thanks to the artificial intelligence techniques developed by FAIR, that figure has now risen to 80%, enough to reconstruct complete sentences during the research.
Despite this progress, there is still significant room for improvement. The current method achieves this accuracy only under controlled conditions, specifically in a magnetically shielded room with test subjects required to remain completely still. Nevertheless, these advances have been sufficient to map how the brain produces speech. Researchers recorded 1,000 snapshots of brain activity per second while participants spoke, then analyzed the data with artificial intelligence software that accurately identified the moments when the brain transformed thoughts into words, syllables, and even letters.
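To make that analysis step concrete, the sketch below shows one plausible way to scan a 1,000 Hz recording for the moment a linguistic unit appears: slide a short window over the signal and score each window against a reference pattern. Everything here (the window sizes, the template-matching decoder, and all names) is an illustrative assumption, not the FAIR team's actual pipeline.

```python
import numpy as np

SAMPLE_RATE_HZ = 1_000   # 1,000 snapshots of brain activity per second
WINDOW_MS = 100          # hypothetical analysis window length
STEP_MS = 10             # hypothetical hop between successive windows

def sliding_windows(signal, rate, win_ms, step_ms):
    """Yield (start_time_s, window) pairs over a channels x samples array."""
    win = rate * win_ms // 1000
    step = rate * step_ms // 1000
    for start in range(0, signal.shape[1] - win + 1, step):
        yield start / rate, signal[:, start:start + win]

# Toy recording: 32 sensor channels, 2 seconds at 1 kHz (random stand-in data).
rng = np.random.default_rng(0)
recording = rng.standard_normal((32, 2 * SAMPLE_RATE_HZ))

# Hypothetical decoder: correlate each window with a stored reference pattern
# for one unit (say, a syllable) and keep the best-matching moment.
template = rng.standard_normal((32, SAMPLE_RATE_HZ * WINDOW_MS // 1000))

best_time, best_score = None, -np.inf
for t, window in sliding_windows(recording, SAMPLE_RATE_HZ, WINDOW_MS, STEP_MS):
    score = np.corrcoef(window.ravel(), template.ravel())[0, 1]
    if score > best_score:
        best_time, best_score = t, score

print(f"Unit most likely produced around t = {best_time:.3f} s (score {best_score:.2f})")
```

In a real system the correlation step would be replaced by a trained neural decoder, but the windowing logic, which turns a continuous 1 kHz stream into time-stamped decisions, is the part the "moments" in the paragraph above refer to.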
Their findings revealed that the brain creates a series of representations, beginning at an abstract level, such as the meaning of a sentence, and gradually translating it into concrete actions, like instructing the fingers to press keys on a keyboard. These representations are chained together by neural mechanisms, effectively forming a structure similar to a linked list in programming, as the sketch below illustrates. The research suggests that the brain relies on a dynamic neural code to accomplish this process; fully deciphering that code, however, remains an ongoing challenge.
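As a loose illustration of the linked-list analogy (the code models the article's metaphor, not any claimed neural implementation), each representation can be pictured as a node that points to the next, more concrete one:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Representation:
    """One level of the cascade, linked to the next, more concrete level."""
    level: str       # e.g. "sentence", "word", "syllable", "letter", "action"
    content: str
    next: Optional["Representation"] = None

# Chain from abstract meaning down to a concrete keystroke (illustrative).
keystroke = Representation("action", "press the 'h' key")
letter = Representation("letter", "h", keystroke)
syllable = Representation("syllable", "hel", letter)
word = Representation("word", "hello", syllable)
sentence = Representation("sentence", "greet the reader", word)

# Walk the chain, as the brain walks from meaning to motor command.
node = sentence
while node is not None:
    print(f"{node.level}: {node.content}")
    node = node.next
```

The dynamic neural code the researchers describe would correspond to the links themselves: how one representation hands off to the next is exactly the part that remains to be deciphered.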
Meta researchers emphasize that language is the ability that has enabled our species to develop skills such as reasoning, learning, and accumulating knowledge. Therefore, understanding the neural and computational processes underlying language is a critical step toward achieving AMI.