Mapping the speech center of the brain brings us closer to AMI

Mapping the function of the brain's speech center is a crucial area of neuroscience, not least because millions of people each year suffer brain lesions that impair their ability to communicate. Progress has nevertheless been slow, because recording brain activity is complicated by the fact that mouth movements distort the signals. The most effective way to filter out this noise so far has been to surgically implant electrodes in the brain, but that approach is highly invasive, which severely limits testing and significantly increases costs.

Meta (formerly Facebook) is making extraordinary efforts in artificial intelligence research, as competition in this field among major companies grows increasingly intense. One of the key initiatives in this effort is the Fundamental Artificial Intelligence Research (FAIR) labs, which Meta established to develop advanced machine intelligence (AMI). Their goal is to create artificial intelligence that perceives and thinks similarly to humans. This research has brought together the expertise of FAIR's Paris laboratory and the Basque Center on Cognition, Brain and Language in Spain.

In the past, advancements in brain research have primarily focused on non-invasive techniques, such as using EEG to record brain signals as they pass through the skull and converting them into images or text. However, this technique has been highly inaccurate, as the captured signals are weak and affected by numerous distortions. Previous decoding efforts achieved an accuracy rate of only about 40%. Thanks to the artificial intelligence techniques developed by FAIR, this accuracy has now increased to 80%. This breakthrough has even enabled the successful reconstruction of complete sentences during research.

Despite this progress, there is still significant room for improvement. The current method only achieves this level of accuracy under controlled conditions—specifically, in a magnetically shielded room, with test subjects required to remain completely still. Nevertheless, these advancements have been sufficient to map how the brain produces speech. Researchers recorded 1,000 snapshots of brain activity per second while participants spoke and then analyzed the data using artificial intelligence software. This software accurately identified the moments when the brain transformed thoughts into words, syllables, and even letters.
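To make the data pipeline concrete, the sketch below shows how a recording sampled at 1,000 snapshots per second might be cut into overlapping windows for a decoder to score. The window length, hop size, and the dummy signal are all illustrative assumptions; the article does not specify the researchers' actual parameters or software.

```python
# Illustrative sketch only: windowing a neural recording sampled at
# 1,000 snapshots per second, as in the experiment described above.
# WINDOW_MS and STEP_MS are hypothetical choices, not the study's values.

SAMPLE_RATE_HZ = 1_000   # 1,000 snapshots of brain activity per second
WINDOW_MS = 250          # hypothetical analysis window length
STEP_MS = 50             # hypothetical hop between successive windows

def sliding_windows(signal, sample_rate=SAMPLE_RATE_HZ,
                    window_ms=WINDOW_MS, step_ms=STEP_MS):
    """Split a 1-D recording into overlapping windows for decoding.

    Yields (start_time_in_seconds, window) pairs; a real decoder would
    score each window for the word, syllable, or letter being produced.
    """
    win = sample_rate * window_ms // 1000
    step = sample_rate * step_ms // 1000
    for start in range(0, len(signal) - win + 1, step):
        yield start / sample_rate, signal[start:start + win]

# Two seconds of placeholder data stand in for a real recording.
recording = [0.0] * (2 * SAMPLE_RATE_HZ)
windows = list(sliding_windows(recording))
print(len(windows))  # → 36 overlapping windows for a decoder to score
```

Each window covers 250 samples and starts 50 samples after the previous one, so consecutive windows overlap heavily—a common way to localize brief events, such as the onset of a syllable, in a continuous signal.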

Their findings revealed that the brain creates a series of representations, beginning at an abstract level—such as the meaning of a sentence—before gradually translating them into actions, like instructing fingers to press keys on a keyboard. These representations are linked together by neural mechanisms, effectively forming a structure similar to a linked list in programming. The research suggests that the brain uses a dynamic neural code to accomplish this process. However, fully deciphering this neural code remains an ongoing challenge.
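The linked-list analogy above can be made concrete with a toy sketch: each representation holds a pointer to the next, more concrete stage, from abstract sentence meaning down to a motor action. The levels and their contents here are simplified assumptions for illustration, not the study's actual model.

```python
# Toy illustration of the article's linked-list analogy: a chain of
# representations from abstract (sentence meaning) to concrete (keystroke).
# The stages and their order are assumptions made for this example.

class Representation:
    """One node in a chain of neural representations."""
    def __init__(self, level, content):
        self.level = level    # e.g. "sentence", "word", "syllable"
        self.content = content
        self.next = None      # link to the next, more concrete stage

def build_chain(stages):
    """Link (level, content) pairs into a list, most abstract first."""
    head = None
    for level, content in reversed(stages):
        node = Representation(level, content)
        node.next = head
        head = node
    return head

chain = build_chain([
    ("sentence", "greet the reader"),
    ("word", "hello"),
    ("syllable", "hel-lo"),
    ("letter", "h"),
    ("action", "press the 'h' key"),
])

# Walk the chain from the abstract idea down to the keystroke.
node = chain
while node:
    print(f"{node.level}: {node.content}")
    node = node.next
```

Traversing the chain mirrors the cascade the researchers describe: each level is produced from the one before it, ending in an instruction to the fingers.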

Meta researchers emphasize that language is the ability that has enabled our species to develop skills such as reasoning, learning, and accumulating knowledge. Therefore, understanding the neural and computational processes underlying language is a critical step toward achieving AMI.   
