The future of AI and the price of transparency – What do the OpenAI files say?

Interest in OpenAI's operations has been growing recently. This is no coincidence: the artificial intelligence models the company has developed – such as ChatGPT – are in widespread use, while only fragmentary information is available about the decision-making and ownership structures behind them. Some light is shed on this obscurity by the OpenAI Files, a report prepared by two technology oversight organizations, the Midas Project and the Tech Oversight Project. The document not only examines the company's internal operations but also touches on broader social questions: what mechanisms are needed when a private company holds the key to the economy of the future?

One of the central elements of the report is a detailed description of OpenAI's organizational transformation. The company was originally launched as a non-profit laboratory in 2015 with the stated goal of developing AGI, or artificial general intelligence, for the benefit of humanity as a whole. In its early years, its research was open and investor interests were pushed into the background. However, this basic position gradually changed.

In 2019, OpenAI introduced a mixed model: it created a capped-profit subsidiary that allowed investors a maximum return of 100 times their investment. In the following years, this cap was steadily reduced: first to 20 times, then to a “single-digit” return. At the same time, the non-profit organization retained complete control over the use and development of the technology—at least on paper.

However, the current report shows that this system may change radically. According to the plans, OpenAI would not only abolish the profit cap but also significantly reduce the non-profit organization's decision-making role. The company plans to adopt a new type of structure, a so-called Public Benefit Corporation, which would formally continue to take the public interest into account but would no longer prioritize it over shareholder profit. Experience with this legal form to date offers little evidence that it effectively advances public-interest goals, especially when those goals conflict with investors' short-term interests.

Further questions arise from the fact that the public only learned after the fact, from third-party reports, that the profit cap could be raised by 20% annually starting in 2025 – which would render the original restriction meaningless in the long run. This rate is nearly seven times the average global economic growth rate (roughly 3% per year), and if it continues, the system that was introduced as a "humanitarian safeguard" will effectively cease to exist within a few decades.
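To see why annual compounding makes the cap meaningless, it helps to run the numbers. The sketch below uses the figures reported above (a 100x base cap raised 20% per year); the function name and the 3% growth comparison are illustrative assumptions, not details from the report.

```python
CAP_GROWTH = 1.20  # reported: cap raised 20% per year from 2025
GDP_GROWTH = 1.03  # assumed long-run global economic growth, ~3% per year


def cap_multiple(years: int, base: float = 100.0) -> float:
    """Return the profit cap (as a multiple of the original investment)
    after `years` of 20% annual increases."""
    return base * CAP_GROWTH ** years


# The cap quickly outgrows any plausible real-world return:
# after 10 years it is ~619x, after 20 years ~3,834x,
# after 30 years ~23,738x - no longer a meaningful restriction.
for years in (10, 20, 30):
    print(years, round(cap_multiple(years)))
```

Compounding at 20% doubles the cap roughly every four years, while the broader economy compounding at ~3% doubles only about every 24 years; the gap is what empties the safeguard of practical effect.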

According to OpenAI's official justification, the previous structure made sense when it seemed that a single company – perhaps OpenAI itself – would dominate the field of AGI. Now that several companies are competing for that goal, the company argues, the restriction has become obsolete. Yet OpenAI's own earlier statements and its 2018 charter suggest that it expected multiple competitors from the outset. This raises the question of whether the change is really justified by changed circumstances, or whether investor pressure is the driving force.

According to the document, some major investors, such as SoftBank, explicitly made the removal of these restrictions a condition of further funding. This reinterprets the function of a structure that once "protected the interests of humanity": it is now treated less as a safeguard and more as an obstacle to doing business.

In addition to the corporate restructuring, the report also draws attention to internal culture. According to accounts from former OpenAI employees, workers who raised concerns about information security or management practices were dismissed. These cases go beyond individual conflicts: in an industry where knowledge and ethical conduct are vital, the ability to voice concerns openly is not only a labor-law issue but also a matter of technological safety.

The OpenAI Files report does not seek to condemn the company's leadership or strategy outright, but it clearly signals the need for a broader, institutionalized oversight mechanism. This can be achieved not only through government regulation, but also through civil society, scientific communities, and the media. The development of AI cannot be confined to laboratories: these are issues that will have a long-term impact on the economy, the labor market, and human relationships.

The development of OpenAI thus points to a broader problem: how can a technology company remain true to its core values in the long term when it is under increasing financial pressure? What structures ensure that rapid development does not come at the expense of social trust?

Answering these questions is not solely the responsibility of engineers, investors, or company executives. The future of AI is a social issue—and society must decide its direction. If we fail to do so now, the technological innovations of the coming decades will serve the interests of the most influential investors rather than the good of the community. 
