Gödel machine: AI that improves itself

Imagine a computer program that can modify its own code, without any human intervention, to become ever better and smarter. This futuristic-sounding concept is called the “Gödel machine.”

Jürgen Schmidhuber, a renowned figure in AI research, proposed the idea of self-improving AI more than two decades ago and called it the “Gödel machine.” According to the original idea, the Gödel machine rewrites its own code when it can mathematically prove that a given self-correction leads to improved performance. However, such mathematical proofs are extremely difficult, so the Gödel machine has remained a theoretical concept until now.

In May, however, a research article that could be a significant step toward the realization of the Gödel machine caused a stir on social media. The study, titled “Darwin Godel Machine: Open-Ended Evolution of Self-Improving Agents” was authored by researchers at the University of British Columbia in Canada and Sakana AI.

The newly presented “Darwin Gödel Machine” (DGM) elegantly circumvents the difficulty of mathematical proof. Instead of proving that a change is beneficial, the DGM relies on evolutionary algorithms and empirical evaluation: multiple self-modifying AI systems compete with each other on various benchmark tasks, and this continuous competition and evaluation drives their self-modification and ongoing improvement.

The research team applied the DGM approach to “coding agents” that automatically generate program code. They allowed these agents to modify their own Python code, for example by adding new tools or suggesting different workflows. The modified agents were then evaluated in coding tests. Interestingly, even the worst-performing agents were archived if their behavior was unique, ensuring evolutionary diversity. This idea helps prevent agents from “getting stuck” in a local optimum and encourages the discovery of innovative solutions.
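The archive-based loop described above can be sketched roughly as follows. This is a toy illustration, not the paper's implementation: the agent representation, mutation, and scoring functions are hypothetical stand-ins, and real DGM agents are full coding agents evaluated on benchmarks.

```python
import random

def evolve(initial_agent, mutate, evaluate, generations=10):
    """Toy sketch of an archive-based evolutionary loop in the spirit of the DGM.

    `mutate` produces a self-modified variant of an agent; `evaluate`
    returns its benchmark score. Every variant is kept in the archive,
    not just the current best, preserving diverse "stepping stones".
    """
    archive = [(initial_agent, evaluate(initial_agent))]
    for _ in range(generations):
        # Pick a parent from the whole archive: weaker but unusual
        # agents can still be chosen, so their lineages are not lost.
        parent, _ = random.choice(archive)
        child = mutate(parent)
        archive.append((child, evaluate(child)))
    return max(archive, key=lambda pair: pair[1])

# Toy demonstration: an "agent" is just a number, mutation perturbs it,
# and the benchmark rewards values close to 42.
best, score = evolve(
    initial_agent=0.0,
    mutate=lambda a: a + random.uniform(-5, 5),
    evaluate=lambda a: -abs(a - 42),
    generations=200,
)
```

Keeping the entire archive, rather than greedily keeping only the top scorer, is what lets the search escape local optima: a mediocre agent today can be the parent of a breakthrough tomorrow.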

Thanks to this “evolution,” the performance of the coding agents improved significantly. On SWE-bench, a benchmark of real-world GitHub issues, their success rate rose from 20.0% to 50.0%, and on Polyglot, which measures coding across multiple programming languages, from 14.2% to 30.7%.

Of course, such self-improving AI research raises security concerns. Many fear that AI evolution will slip out of human control, or that AI will “cheat” during testing. The research team addresses these concerns by running all self-modifications in a “sandbox” environment under human supervision.

The second most mentioned research article in May came from NVIDIA and explores the mystery of AI logical thinking. The study, titled “ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models” examines how the latest AI models, such as OpenAI o1 and DeepSeek-R1, achieve their exceptional logical reasoning abilities.

There is a lively debate among AI researchers about the extent to which reinforcement learning influences the reasoning abilities of foundation models. The crux of the debate is whether reinforcement learning merely unlocks existing reasoning abilities in foundation models or endows them with entirely new reasoning abilities. The latest research tends to support the former view.

However, NVIDIA's research challenges this trend. Using their reinforcement learning method called “ProRL,” which enables long-term, stable learning, they demonstrated that the model was able to “discover” new reasoning strategies and find solutions to tasks that the original foundation model could not answer correctly. This suggests that reinforcement learning can indeed endow base models with new reasoning abilities.
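One ingredient that makes such prolonged training stable is regularizing the policy with a KL penalty against a reference model, a technique ProRL employs alongside others. The sketch below is a simplified, schematic stand-in for that idea only; the function name, `beta` value, and toy log-probabilities are illustrative assumptions, not the paper's actual objective.

```python
def kl_regularized_reward(task_reward, logp_policy, logp_reference, beta=0.01):
    """Schematic KL-regularized RL reward, as used to stabilize long
    reinforcement-learning runs: the verifiable task reward is offset
    by a penalty that keeps the policy close to a reference model.

    `logp_policy` / `logp_reference` are per-token log-probabilities of
    the sampled response under the current and reference models.
    """
    # Monte Carlo estimate of the KL divergence on the sampled tokens.
    kl = sum(p - r for p, r in zip(logp_policy, logp_reference))
    return task_reward - beta * kl

# Toy usage: a correct answer (reward 1.0) whose tokens have drifted
# slightly from the reference model.
r = kl_regularized_reward(1.0, [-0.1, -0.2], [-0.15, -0.25], beta=0.1)
```

The larger `beta` is, the more strongly the model is anchored to the reference, trading exploration for stability.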

These research breakthroughs show that the development of artificial intelligence is progressing at an astonishing rate. Self-improving AIs, such as the Darwin Gödel machine, could revolutionize software development and many other fields. At the same time, it is crucial that we address the ethical and safety issues involved in a responsible and thoughtful manner, ensuring that the development of AI serves the good of humanity. 
