Mapping the speech center of the brain brings us closer to AMI

Mapping the function of the brain's speech center is a crucial area of neuroscience, not least because millions of people each year suffer brain lesions that impair their ability to communicate. Progress has nevertheless been slow in recent years, because recording brain waves is complicated by the fact that mouth movements distort the signals. So far, the most effective way to filter out this noise has been to surgically implant electrodes in the brain, but that approach is highly invasive, which severely limits testing and drives up costs.

Meta (formerly Facebook) is making extraordinary efforts in artificial intelligence research, as competition in this field among major companies grows increasingly intense. One of the key initiatives in this effort is the Fundamental Artificial Intelligence Research (FAIR) labs, which Meta established to develop advanced machine intelligence (AMI). Their goal is to create artificial intelligence that perceives and thinks similarly to humans. This research has brought together the expertise of FAIR's Paris laboratory and the Basque Center on Cognition, Brain and Language in Spain.

In the past, advancements in brain research have primarily focused on non-invasive techniques, such as using EEG to record brain signals as they pass through the skull and converting them into images or text. However, this technique has been highly inaccurate, as the captured signals are weak and affected by numerous distortions. Previous decoding efforts achieved an accuracy rate of only about 40%. Thanks to the artificial intelligence techniques developed by FAIR, this accuracy has now increased to 80%. This breakthrough has even enabled the successful reconstruction of complete sentences during research.

Despite this progress, there is still significant room for improvement. The current method only achieves this level of accuracy under controlled conditions: specifically, in a magnetically shielded room, with test subjects required to remain completely still. Nevertheless, these advancements have been sufficient to map how the brain produces speech. Researchers recorded 1,000 snapshots of brain activity per second while participants spoke, then analyzed the data with artificial intelligence software, which accurately identified the moments when the brain transformed thoughts into words, syllables, and even letters.
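To make that decoding step a bit more tangible, here is a deliberately simplified sketch of the general idea: snapshots of brain activity are cut into short windows and a classifier guesses which character each window corresponds to. The 1,000-samples-per-second rate comes from the article; everything else (the sensor count, window length, toy alphabet, simulated data, and the simple logistic-regression classifier) is an assumption for illustration, not Meta's actual pipeline.

```python
# Minimal sketch (not Meta's pipeline): decoding characters from windows of
# brain-activity snapshots sampled at 1,000 Hz. Shapes and labels are toy values.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

SAMPLE_RATE = 1_000      # snapshots of brain activity per second (from the article)
WINDOW_MS = 250          # hypothetical analysis window per produced character
N_SENSORS = 32           # hypothetical number of recording channels
CHARS = list("abcd")     # toy alphabet of decoding targets

def make_window(char_idx):
    """Simulate one window of sensor data with a weak character-specific bias."""
    window = rng.normal(size=(WINDOW_MS, N_SENSORS))
    window[:, char_idx] += 0.5          # faint signal buried in noise
    return window.reshape(-1)           # flatten to a feature vector

# Build a toy training set: many labelled windows per character.
X = np.array([make_window(i) for i in range(len(CHARS)) for _ in range(200)])
y = np.array([i for i in range(len(CHARS)) for _ in range(200)])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Decode an unseen window; the real study maps whole recordings to typed
# sentences with a deep model rather than single characters with a linear one.
test = make_window(2).reshape(1, -1)
print("decoded character:", CHARS[int(clf.predict(test)[0])])
```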

Their findings revealed that the brain creates a series of representations, beginning at an abstract level—such as the meaning of a sentence—before gradually translating them into actions, like instructing fingers to press keys on a keyboard. These representations are linked together by neural mechanisms, effectively forming a structure similar to a linked list in programming. The research suggests that the brain uses a dynamic neural code to accomplish this process. However, fully deciphering this neural code remains an ongoing challenge.
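For readers who do not write code, the snippet below shows what such a linked list looks like, with node labels borrowed from the hierarchy described above. It is purely an illustration of the programming analogy and says nothing about how neurons actually implement the chaining.

```python
# Illustrative only: a tiny linked list whose nodes mirror the reported
# hierarchy of representations, from abstract meaning down to a motor action.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RepresentationNode:
    level: str                                   # e.g. "sentence meaning"
    content: str                                 # what is represented at this level
    next: Optional["RepresentationNode"] = None  # link to the next, more concrete level

def build_chain(levels):
    """Chain (level, content) pairs into a linked list, most abstract first."""
    head = None
    for level, content in reversed(levels):
        head = RepresentationNode(level, content, head)
    return head

chain = build_chain([
    ("sentence meaning", "greet the reader"),
    ("word", "hello"),
    ("syllable", "hel-lo"),
    ("letter", "h"),
    ("motor action", "press the 'h' key"),
])

node = chain
while node is not None:
    print(f"{node.level}: {node.content}")
    node = node.next
```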

Meta researchers emphasize that language is the ability that has enabled our species to develop skills such as reasoning, learning, and accumulating knowledge. Therefore, understanding the neural and computational processes underlying language is a critical step toward achieving AMI.   

Google Introduces the Agent2Agent (A2A) Open Source Protocol
In a recent speech, Jensen Huang (CEO of NVIDIA) divided the evolution of artificial intelligence into several phases and called the current one the era of Agentic AI. Although he focused mainly on the next phase, the era of physical AI, it is worth remembering that the Agentic AI era itself only began this year, so we have not yet seen its fully developed form. Google's recent announcement of the open-source Agent2Agent protocol hints at what that more mature form might look like. The protocol is designed to bridge the gap between AI agents built on different platforms and frameworks and by different vendors, enabling smooth communication and collaboration.
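To give a feel for what this kind of cross-vendor communication involves, here is a rough sketch of one agent handing a task to another over HTTP using a JSON-RPC style envelope. The endpoint URL is a placeholder, and the method and field names ("tasks/send", "message", "parts") are assumptions chosen for illustration; the published A2A specification defines the actual schema.

```python
# Rough sketch, in the spirit of A2A: one agent posts a task to another agent's
# HTTP endpoint. Method and field names are illustrative assumptions.
import json
import uuid
import urllib.request

def send_task(agent_url: str, text: str) -> dict:
    """POST a task request to a remote agent endpoint and return its reply."""
    payload = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "tasks/send",                      # assumed method name
        "params": {
            "id": str(uuid.uuid4()),                 # task identifier
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            },
        },
    }
    request = urllib.request.Request(
        agent_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Hypothetical usage: ask a remote agent, built by another vendor, to plan a
# trip. The URL is a placeholder, so the call is left commented out.
# reply = send_task("https://agents.example.com/a2a", "Plan a 3-day trip to Kyoto")
# print(reply)
```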
Apple in Trouble with Artificial Intelligence Developments?
Apple appears to be facing mounting problems. Beyond Trump's tariffs, which have hit the company's shares hard, there are internal conflicts, especially in the division responsible for AI integration. Tripp Mickle, a journalist for The New York Times, reports that Apple has not managed to produce any notable innovations lately. That may not be entirely fair, since after much internal debate the company did finally launch Apple Intelligence, but there is no doubt that it lags behind its competitors in the field of artificial intelligence.
New Collaboration Between Netflix and OpenAI
Netflix recently began testing a new artificial intelligence-based search feature that uses OpenAI's technology to improve content discovery. The feature is a significant departure from traditional search because it lets users find movies and TV shows by describing their mood or preferences in their own words, rather than relying only on titles, genres, or actor names.
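Netflix has not described how the feature is built, but a common way to implement this kind of search is to embed catalogue descriptions and the user's query as vectors and rank titles by similarity. The sketch below illustrates that pattern with a toy bag-of-words "embedding" standing in for a real learned model, so it runs without any external service; the titles and descriptions are invented.

```python
# Toy mood-based search: rank catalogue entries by how well their descriptions
# match the user's free-form query. A real system would use learned embeddings.
import math
from collections import Counter

CATALOGUE = {
    "Quiet Horizons": "slow, comforting drama about rebuilding a family farm",
    "Neon Chase": "fast-paced action thriller full of car chases and heists",
    "Laugh Supply": "light-hearted stand-up comedy special for a lazy evening",
}

def embed(text: str) -> Counter:
    """Toy embedding: word counts (stand-in for a learned text-embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str):
    """Rank titles by similarity between the query and each description."""
    q = embed(query)
    return sorted(CATALOGUE, key=lambda t: cosine(q, embed(CATALOGUE[t])), reverse=True)

print(search("something light-hearted and comforting for a lazy evening"))
```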
Strong Turbulence Around Meta Llama Models
Less than a week after its market debut, Llama 4 has already drawn harsh criticism from users. As mentioned before, one of Llama 4's new features is its mixture-of-experts architecture, which is built from separate expert modules. This design gives the model a much larger total parameter count than the subset it actually activates at run time, so in theory it should perform much better. However, several independent user tests show that it falls short of expectations, especially on mathematical and coding tasks. Some users claim that Meta heavily manipulated benchmarks to achieve better scores, while others believe that an internal version of the model was used for the benchmarks while a more modest version was released to the public.
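The sketch below shows the routing idea behind such a mixture-of-experts design in miniature: a router activates only a few expert modules per token, so most of the model's parameters sit idle on any given input. The sizes, the number of experts, and the top-2 routing here are illustrative assumptions, not Llama 4's actual configuration.

```python
# Minimal mixture-of-experts layer: a router scores the experts and only the
# top-k are evaluated for each token, so active parameters << total parameters.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, D_HIDDEN = 16, 32
N_EXPERTS, TOP_K = 8, 2          # 8 experts in total, 2 active per token (toy values)

# Each expert is a small two-layer MLP; together they hold most of the parameters.
experts = [
    (rng.normal(size=(D_MODEL, D_HIDDEN)), rng.normal(size=(D_HIDDEN, D_MODEL)))
    for _ in range(N_EXPERTS)
]
router = rng.normal(size=(D_MODEL, N_EXPERTS))   # learned jointly in a real model

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through its top-k experts and mix the outputs."""
    scores = x @ router                                         # one score per expert
    top = np.argsort(scores)[-TOP_K:]                           # chosen expert indices
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()   # softmax over chosen experts
    out = np.zeros_like(x)
    for w, i in zip(weights, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)               # ReLU MLP expert
    return out

token = rng.normal(size=D_MODEL)
print(moe_layer(token).shape)    # only 2 of the 8 experts were evaluated for this token
```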