Mapping the speech centre of the brain brings us closer to AMI

Mapping the function of the brain's speech center is a crucial area of neuroscience. One key reason is that millions of people each year suffer brain lesions that impair their ability to communicate. Progress, however, has been slow, because recordings of brain activity are distorted by the muscle movements of the mouth during speech. So far, the most effective way to filter out this noise has been to surgically implant electrodes in the brain, but that approach is highly invasive, severely limiting the scope of testing and driving up costs.

Meta (formerly Facebook) is making extraordinary efforts in artificial intelligence research, as competition in this field among major companies grows increasingly intense. One of the key initiatives in this effort is the Fundamental Artificial Intelligence Research (FAIR) labs, which Meta established to develop advanced machine intelligence (AMI). Their goal is to create artificial intelligence that perceives and thinks similarly to humans. This research has brought together the expertise of FAIR's Paris laboratory and the Basque Center on Cognition, Brain and Language in Spain.

In the past, advancements in brain research have primarily focused on non-invasive techniques, such as using EEG to record brain signals as they pass through the skull and converting them into images or text. However, this technique has been highly inaccurate, as the captured signals are weak and affected by numerous distortions. Previous decoding efforts achieved an accuracy rate of only about 40%. Thanks to the artificial intelligence techniques developed by FAIR, this accuracy has now increased to 80%. This breakthrough has even enabled the successful reconstruction of complete sentences during research.

Despite this progress, there is still significant room for improvement. The current method only achieves this level of accuracy under controlled conditions, specifically in a magnetically shielded room with test subjects required to remain completely still. Nevertheless, these advancements have been sufficient to map how the brain produces speech. Researchers recorded 1,000 snapshots of brain activity per second while participants spoke and then analyzed the data using artificial intelligence software. This software accurately identified the moments when the brain transformed thoughts into words, syllables, and even letters.

Their findings revealed that the brain creates a series of representations, beginning at an abstract level—such as the meaning of a sentence—before gradually translating them into actions, like instructing fingers to press keys on a keyboard. These representations are linked together by neural mechanisms, effectively forming a structure similar to a linked list in programming. The research suggests that the brain uses a dynamic neural code to accomplish this process. However, fully deciphering this neural code remains an ongoing challenge.
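The linked-list analogy above can be made concrete with a small sketch. This is purely illustrative, not the researchers' model: it shows how a chain of representations, from the abstract meaning of a sentence down to concrete motor actions, can be expressed as linked nodes, where each level points to the next, more concrete one. All names here (`Node`, `build_chain`, the level labels) are hypothetical.

```python
# Illustrative sketch only: the article's linked-list analogy for the
# brain's chain of representations, from abstract (sentence meaning)
# down to concrete actions (keystrokes). Names are hypothetical.

class Node:
    """One representational level, linked to the next, more concrete one."""
    def __init__(self, level, content):
        self.level = level      # e.g. "sentence", "word", "syllable"
        self.content = content  # what is represented at this level
        self.next = None        # link to the next level in the chain

def build_chain(levels):
    """Link (level, content) pairs into a chain, most abstract level first."""
    head = None
    for level, content in reversed(levels):
        node = Node(level, content)
        node.next = head
        head = node
    return head

def traverse(head):
    """Walk the chain from abstract to concrete, collecting level names."""
    out = []
    node = head
    while node:
        out.append(node.level)
        node = node.next
    return out

chain = build_chain([
    ("sentence", "the intended meaning"),
    ("word", "individual words"),
    ("syllable", "syllable units"),
    ("letter", "letters / keystrokes"),
])
print(traverse(chain))  # ['sentence', 'word', 'syllable', 'letter']
```

The chain is traversed strictly in one direction, abstract to concrete, which is what makes the linked list a fitting image for the gradual translation the study describes.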

Meta researchers emphasize that language is the ability that has enabled our species to develop skills such as reasoning, learning, and accumulating knowledge. Therefore, understanding the neural and computational processes underlying language is a critical step toward achieving AMI.   
