Safe Superintelligence reportedly raising capital at a $20 billion valuation

The startup Safe Superintelligence was founded by Ilya Sutskever, co-founder and former chief scientist of OpenAI; Daniel Levy, a former OpenAI researcher; and Daniel Gross, who previously led artificial intelligence projects at Apple. Little is known about the company's activities to date; a brief note on its website says the company was established with a single purpose: to create safe superintelligence.

The company has been operating since June of last year and has yet to generate any revenue, but it has already raised $1 billion in investment. According to its website, generating revenue is not even a short-term goal; for now, the company is focused solely on research, with offices in Palo Alto and Tel Aviv.

Ilya Sutskever is a widely recognized artificial intelligence researcher whose award-winning work centers on deep learning. Born in the Soviet Union, he emigrated first to Israel and later to Canada. As a researcher at Google, he co-developed the sequence-to-sequence model, which became a cornerstone of machine translation. At OpenAI, he helped create the GPT (Generative Pre-trained Transformer) family of models.

He left OpenAI after his relationship with Sam Altman deteriorated. The disagreement stemmed from Sutskever's belief that Altman was pushing progress too far, too fast. Sutskever played a role in Altman's brief ousting but later publicly apologized and stepped down from the board. He remained at OpenAI as a researcher for a while longer, though his relationship with Altman stayed strained. His eventual departure may have been driven by these ongoing disagreements, and his new company, Safe Superintelligence, may reflect his ambition to pursue safe superintelligence free from commercial pressures, an ideal that was a core principle of OpenAI when it was founded in 2015.

Phase Transition Observed in Language Model Learning
What happens inside the "mind" of artificial intelligence when it learns to understand language? How does it move from simply following the order of words to grasping their meaning? A recently published study offers a theoretical perspective on these internal processes and identifies a transformation that resembles a physical phase transition.
How AI is Helping to Reduce Carbon Emissions in the Cement Industry
One industry alone is responsible for around eight percent of global carbon emissions: cement production. That’s more than the entire aviation sector emits worldwide. As the world increasingly relies on concrete for housing, infrastructure, and industrial facilities, cement manufacturing remains highly energy-intensive and a major source of pollution. A research team at the Paul Scherrer Institute (PSI) in Switzerland is aiming to change this—by using artificial intelligence to develop new, more environmentally friendly cement formulas.
Where is Artificial Intelligence Really Today?
The development of artificial intelligence has produced spectacular and often impressive results in recent years. Systems like ChatGPT can generate natural-sounding language, solve problems, and in many tasks, even surpass human performance. However, a growing number of prominent researchers and technology leaders — including John Carmack and François Chollet — caution that these achievements don’t necessarily indicate that artificial general intelligence (AGI) is just around the corner. Behind the impressive performances, new types of challenges and limitations are emerging that go far beyond raw capability.
SEAL: The Harbinger of Self-Taught Artificial Intelligence
For years, the dominant belief was that human instruction—through data, labels, fine-tuning, and carefully designed interventions—was the key to advancing artificial intelligence. Today, however, a new paradigm is taking shape. In a recent breakthrough, researchers at MIT introduced SEAL (Self-Adapting Language Models), a system that allows language models to teach themselves. This is not only a technological milestone—it also raises a fundamental question: what role will humans play in the training of intelligent systems in the future?
All it takes is a photo and a voice recording – Alibaba's new artificial intelligence creates a full-body avatar from them
A single voice recording and a photo are enough to create lifelike, full-body virtual characters with facial expressions and emotions – without a studio, actor, or green screen. Alibaba's latest development, an open-source artificial intelligence model called OmniAvatar, promises to do just that. Although the technology is still evolving, it is already worth paying attention to what it enables – and what new questions it raises.
Spatial intelligence is the next hurdle for AGI to overcome
With the advent of large language models (LLMs), machines have gained impressive capabilities, and the pace of development keeps accelerating, with new and more capable models appearing constantly. On closer inspection, however, this technology has so far enabled machines to think in only one dimension: the linear sequence of language. The world we live in is three-dimensional, at least as humans perceive it. It is not difficult for a person to determine that something is under or behind a chair, or where a ball flying towards us will land. According to many artificial intelligence researchers, for artificial general intelligence (AGI) to emerge, machines must be able to reason in three dimensions, and that requires developing spatial intelligence.

Linux distribution updates released in the last few days