Organized Scientific Fraud Is Growing at an Alarming Rate

The world of science is built on curiosity, collaboration, and collective progress—at least in principle. In reality, it has always been marked by competition, inequality, and the potential for error. The scientific community has long feared that these pressures could divert some researchers from the fundamental mission of science: creating credible knowledge. For a long time, fraud appeared to be mainly the work of lone perpetrators. In recent years, however, a troubling trend has emerged: according to a recent study, fraud is no longer a series of isolated missteps but an organized, industrial-scale activity.

The explosive spread of the internet and open-access publishing has facilitated the flow of scientific information—but has also created space for networks specializing in fraud. These “paper mills” mass-produce low-quality or outright fake research, while brokers connect fraudulent researchers with publishers.

The functioning of the scientific world can be compared to a collective “game of trust,” in which scientists, universities, funders, and society work together to expand knowledge. The system is built on good faith: researchers rely on each other’s work, publishers ensure quality control, and institutions and funders depend on professional evaluation.

However, this chain of trust can easily be broken if someone fails to uphold their part. Moreover, scientific performance is now increasingly measured by quantitative indicators—such as the h-index, journal impact factors, or university rankings. These metrics have rapidly become the main criteria for allocating resources and rewards, intensifying competition and inequality, and making the system more vulnerable to fraud.
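
For readers less familiar with the first of these metrics: a researcher's h-index is the largest number h such that h of their papers have each received at least h citations. A minimal Python sketch of the standard definition (the citation counts below are illustrative):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 3,
# because exactly three papers have been cited at least 3 times each.
print(h_index([10, 8, 5, 2, 1]))  # 3
```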

“Scientific defection” occurs when someone reaps the rewards of the system without conducting genuine scientific work. In a 2002 survey, 0.2% of researchers funded by the US NIH admitted to falsifying data. A more recent study, analyzing over 20,000 articles, found that 3.8% contained improperly duplicated images, at least half of which showed signs of deliberate manipulation. According to some publishers, up to 14% of submitted manuscripts likely originate from “paper mills.”

Recent research shows that the networks carrying out scientific fraud are large, resilient, and growing rapidly. These entities not only produce fake articles but also act as intermediaries in an extensive network where editors and authors collaborate to bypass the traditional peer-review process.

A case study analyzing data from the journal PLOS ONE revealed that certain editors were significantly more likely to accept articles that were later retracted or criticized on PubPeer, a post-publication peer-review site. The study identified 22 editors who accepted a disproportionately high number of subsequently retracted articles and 33 who did the same for articles flagged on PubPeer. The research also identified authors who submitted their work to these “flagged” editors unusually often, pointing to collusion between editors and authors. One particularly tightly connected group of editors reviewed each other's articles between 2020 and 2023, and more than half of the articles they accepted were later retracted. Similar patterns were found in journals from the publisher Hindawi and in IEEE conference proceedings.
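
The study's exact statistical machinery isn't reproduced here, but the core idea behind flagging an editor can be illustrated with a simple binomial tail test: how likely is it that an editor handling n papers would see at least k retractions if retractions struck purely at the journal-wide base rate? All numbers in this sketch are hypothetical:

```python
from math import comb

def upper_tail_pvalue(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the probability of at least
    k retractions among n accepted papers arising by chance alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical editor: 200 accepted papers, 15 later retracted,
# against an assumed journal-wide retraction rate of 1%.
pval = upper_tail_pvalue(15, 200, 0.01)
print(f"P(>= 15 retractions by chance) = {pval:.2e}")  # vanishingly small
```

A vanishingly small tail probability does not prove collusion on its own, but it marks an editor whose record deviates far too much from the base rate to ignore.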

The industrial scale of the fraud is also evident from falsified images. A network of 2,213 articles was mapped, linked by shared, duplicated images. The network broke down into large, interconnected clusters, suggesting the articles were produced in large batches from a common image bank. These articles typically appeared in the same journals during the same period, indicating coordinated activity between paper mills and cooperating journals. Despite the clear evidence of fraud from image duplication, only 34.1% of these articles have been retracted.
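
Mapping such a network is at heart a graph problem: treat articles as nodes, connect two articles whenever they share a duplicated image, and read the batches off as connected components. A toy sketch of this approach, with all article identifiers invented for illustration:

```python
import networkx as nx

# Articles are nodes; an edge means two articles share a duplicated image.
shared_images = [
    ("article_A", "article_B"),
    ("article_B", "article_C"),  # A, B, C form one batch
    ("article_D", "article_E"),  # D, E form another
]

G = nx.Graph()
G.add_edges_from(shared_images)

# Each connected component is a candidate batch drawn from a common image bank.
for batch in nx.connected_components(G):
    print(sorted(batch))
# ['article_A', 'article_B', 'article_C']
# ['article_D', 'article_E']
```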

Organizations specializing in fraud are extremely adaptable. When a journal they use is de-indexed by major scientific databases (such as Scopus or Web of Science)—meaning it is removed due to quality concerns—the fraudsters simply target new outlets. This phenomenon is known as “journal hopping.”

The case of an organization called ARDA illustrates this well. ARDA advertises on its website that it can guarantee publication in certain journals. Their portfolio changes dynamically: in 2018, they offered only 14 journals; by March 2024, this number had grown to 86. When Scopus de-indexed a group of journals they used in 2020 or 2021, ARDA removed them from its portfolio in May 2021 and replaced them with new ones. Of the Scopus-indexed journals listed by ARDA, 33.3% were later de-indexed—compared to just 0.5% for all Scopus journals. This indicates that such organizations deliberately target journals with weak quality control that are still indexed.

Scientific fraud is not evenly distributed across disciplines. Certain rapidly growing and popular fields are particularly attractive targets. One study compared six closely related subfields of RNA biology. While the retraction rate for articles on CRISPR-Cas9 technology was only 0.1%, the rates were dramatically higher for circular RNAs (2.5%), for studies linking microRNAs to cancer (4%), and for research on long non-coding RNAs (also 4%). These rates far exceed what honest mistakes could explain and clearly point to concentrated fraudulent activity.

Perhaps most alarming is that scientific fraud is growing much faster than legitimate scientific output. While the total number of scientific publications doubles approximately every 15 years, the number of retracted articles doubles every 3.3 years, and the number of suspicious paper-mill products doubles in just 1.5 years. In other words, not only is fraud itself growing exponentially, so is its share of the literature.
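
A back-of-the-envelope calculation from these doubling times shows how quickly the fraudulent share compounds: two quantities that each double at a constant rate both grow exponentially, so their ratio changes exponentially as well.

```python
# With doubling time T, a quantity grows as N(t) = N0 * 2**(t / T).
# The ratio of a fast-growing to a slow-growing quantity therefore
# doubles every 1 / (1/T_fast - 1/T_slow) years.
def ratio_doubling_time(t_fast: float, t_slow: float) -> float:
    return 1 / (1 / t_fast - 1 / t_slow)

T_publications = 15.0  # all publications: ~15 years per doubling
T_retractions = 3.3    # retracted articles: ~3.3 years
T_paper_mills = 1.5    # suspected paper-mill output: ~1.5 years

print(f"{ratio_doubling_time(T_retractions, T_publications):.1f}")  # ~4.2 years
print(f"{ratio_doubling_time(T_paper_mills, T_publications):.1f}")  # ~1.7 years
```

On the article's figures, the retracted share of the literature doubles roughly every 4.2 years, and the paper-mill share roughly every 1.7 years.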

Current punitive measures, such as de-indexing journals, are vastly insufficient given the scale of the problem. While Scopus and Web of Science together de-index about a hundred journals annually, the number of journals publishing content from paper mills is ten times higher. Only a fraction (about 28.7%) of suspicious articles are retracted. Based on current trends, only about 25% of such articles will ever be retracted, and just 10% will appear in a de-indexed journal.

The severity of the situation is compounded by the fact that scientific fraud undermines trust in science and endangers future research. Artificial intelligence and large language models, intended to summarize and analyze scientific literature, are currently unable to distinguish genuine science from fakes. If the literature becomes saturated with fraudulent work, these technologies could unintentionally amplify and spread false information, with unforeseeable consequences. The situation demands urgent, coordinated action from all stakeholders in the scientific community. 
