Safe Superintelligence to raise $20 billion in capital

The startup Safe Superintelligence was founded by Ilya Sutskever, a co-founder and former chief scientist of OpenAI; Daniel Levy, a former OpenAI researcher; and Daniel Gross, the former head of artificial intelligence projects at Apple. Little is known about the company's activities to date. A brief note on its website says only that the company was established with a single purpose: to create safe superintelligence.

The company has been operating since June of last year and has not yet generated any revenue, but it has already raised $1 billion in investment. According to its website, generating revenue is not even among its short-term goals. For now, the company is focused solely on research, working from offices in Palo Alto and Tel Aviv.

Ilya Sutskever is a widely recognized artificial intelligence researcher, best known for his award-winning work in deep learning. Born in the Soviet Union, he emigrated to Israel and later to Canada. As a researcher at Google, he co-developed the sequence-to-sequence model, which became a cornerstone of machine translation. At OpenAI, he helped create the GPT (Generative Pre-trained Transformer) models.

He left OpenAI after his relationship with Sam Altman deteriorated. The disagreement stemmed from Sutskever's belief that Altman was pushing progress too far, too fast. He played a role in Altman's brief ousting but later publicly apologized and resigned from his position on the board. Sutskever remained at OpenAI as a researcher for a while longer, but his relationship with Altman stayed strained. His departure may have been driven by these ongoing disagreements, and his new company, Safe Superintelligence, may reflect his ambition to pursue safe superintelligence free from commercial pressures, an ideal that was a core principle of OpenAI when it was founded in 2015.
