Mapping the speech centre of the brain brings us closer to AMI

Mapping the function of the brain's speech centre is a crucial area of neuroscience. One key reason is that millions of people each year suffer brain lesions that impair their ability to communicate. Progress in mapping the speech centre has nevertheless been slow, because recording brain waves is complicated by the fact that mouth movements distort the signals. So far, the most effective way to filter out this noise has been to surgically implant electrodes in the brain. This approach is highly invasive, however, which severely limits testing and significantly increases costs.

Meta (formerly Facebook) is investing heavily in artificial intelligence research as competition among the major companies in this field grows increasingly intense. A key part of this effort is the Fundamental Artificial Intelligence Research (FAIR) lab, which Meta established to develop advanced machine intelligence (AMI): artificial intelligence that perceives and thinks in ways similar to humans. The present research brings together the expertise of FAIR's Paris laboratory and the Basque Center on Cognition, Brain and Language in Spain.

Until now, advances in non-invasive brain recording have mainly relied on techniques such as EEG, which records brain signals through the skull and converts them into images or text. This approach has been highly inaccurate, however, because the captured signals are weak and subject to numerous distortions: previous decoding efforts achieved an accuracy of only about 40%. Thanks to the artificial intelligence techniques developed at FAIR, accuracy has now risen to 80%, enough to successfully reconstruct complete sentences during the research.

Despite this progress, there is still significant room for improvement. The current method achieves this level of accuracy only under controlled conditions: in a magnetically shielded room, with test subjects required to remain completely still. Nevertheless, these advancements have been sufficient to map how the brain produces speech. Researchers recorded 1,000 snapshots of brain activity per second while participants spoke, then analyzed the data using artificial intelligence software. The software accurately identified the moments when the brain transformed thoughts into words, syllables, and even letters.

Their findings revealed that the brain creates a series of representations, beginning at an abstract level—such as the meaning of a sentence—before gradually translating them into actions, like instructing fingers to press keys on a keyboard. These representations are linked together by neural mechanisms, effectively forming a structure similar to a linked list in programming. The research suggests that the brain uses a dynamic neural code to accomplish this process. However, fully deciphering this neural code remains an ongoing challenge.
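The linked-list analogy above can be made concrete with a small sketch. The class and level names here are hypothetical illustrations, not anything from the research itself: each node holds one level of representation (sentence meaning, word, syllable, letter, keystroke) and points to the next, more concrete level, just as a singly linked list chains nodes together.

```python
class Representation:
    """One level in the hypothesized cascade of representations (illustrative only)."""

    def __init__(self, level, content):
        self.level = level      # e.g. "sentence", "word", "syllable"
        self.content = content  # what is represented at this level
        self.next = None        # link to the next, more concrete level


def build_cascade(levels):
    """Chain (level, content) pairs into a singly linked list, most abstract first."""
    head = tail = None
    for level, content in levels:
        node = Representation(level, content)
        if head is None:
            head = node
        else:
            tail.next = node
        tail = node
    return head


def traverse(head):
    """Walk from the abstract head to the concrete tail, collecting level names."""
    order = []
    node = head
    while node is not None:
        order.append(node.level)
        node = node.next
    return order


# Hypothetical example: producing the word "hello" on a keyboard.
cascade = build_cascade([
    ("sentence", "greet the reader"),
    ("word", "hello"),
    ("syllable", "hel-lo"),
    ("letter", "h"),
    ("keystroke", "press the H key"),
])
print(traverse(cascade))  # → ['sentence', 'word', 'syllable', 'letter', 'keystroke']
```

Traversal always proceeds from abstract to concrete, which mirrors the article's description of the brain gradually translating a sentence's meaning into motor actions; the real neural code, as the article notes, remains undeciphered.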

Meta researchers emphasize that language is the ability that has enabled our species to develop skills such as reasoning, learning, and accumulating knowledge. Therefore, understanding the neural and computational processes underlying language is a critical step toward achieving AMI.
