Revolutionary AI Memory System Unveiled

Large Language Models (LLMs) are central to the pursuit of Artificial General Intelligence (AGI), yet they face considerable limitations in memory management. Contemporary LLMs typically depend on knowledge embedded in their fixed weights and on a limited context window at runtime, which hinders their ability to retain or update information over extended periods. While approaches such as Retrieval-Augmented Generation (RAG) integrate external knowledge, they frequently lack a structured approach to memory, which often leads to forgotten past interactions, reduced adaptability, and memory siloed across platforms. In short, current LLMs do not treat memory as a persistent, manageable, or shareable resource, and this constrains their practical utility.

In response to these challenges, researchers from MemTensor (Shanghai) Technology Co., Ltd., Shanghai Jiao Tong University, Renmin University of China, and the Research Institute of China Telecom have jointly developed MemOS, a memory operating system that treats memory as a fundamental resource within language models. A key component of MemOS is MemCube, a unified abstraction that manages parametric, activation, and plaintext memory. MemOS enables structured, traceable, cross-task memory handling, allowing models to adapt continuously, internalize user preferences, and maintain consistent behavior. This represents a significant shift, transforming LLMs from static generators into dynamic, evolving systems capable of long-term learning and coordination across platforms.

As AI systems become increasingly complex, handling a diverse range of tasks, roles, and data types, language models must advance beyond mere text comprehension to encompass memory retention and continuous learning. Current LLMs’ deficiency in structured memory management restricts their capacity for adaptation and growth over time. MemOS addresses this by treating memory as a core, schedulable resource, enabling long-term learning through structured storage, version control, and unified memory access. Unlike conventional training methodologies, MemOS supports a continuous "memory training" paradigm, which blurs the distinction between learning and inference. Furthermore, it incorporates governance features, ensuring traceability, access control, and secure utilization within evolving AI systems.
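To make the "memory training" idea concrete, the toy loop below is a minimal Python sketch; all names are our own assumptions, since the source includes no code. It shows how a model's knowledge can change between calls through memory writes alone, with no gradient update: each interaction both reads from and writes back to an external store.

```python
# Toy illustration: learning happens through memory writes, not weight updates.
memory: dict[str, str] = {}  # stands in for MemOS-managed plaintext memory


def model_answer(question: str, context: dict[str, str]) -> str:
    """Stand-in for an LLM call; a real system would fold `context`
    into the prompt. Here we simply look the fact up."""
    return context.get(question, "I don't know yet.")


def interact(question: str, feedback: str | None = None) -> str:
    answer = model_answer(question, memory)  # inference reads memory...
    if feedback is not None:
        memory[question] = feedback          # ...and the same step updates it
    return answer


print(interact("capital of Australia?"))                # I don't know yet.
interact("capital of Australia?", feedback="Canberra")  # "teach" via a memory write
print(interact("capital of Australia?"))                # Canberra
```

The point of the sketch is the blurred boundary the paragraph describes: the write inside `interact` is the "training" step, yet it happens during ordinary inference.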

MemOS is designed as a memory-centric operating system for language models, conceptualizing memory not merely as stored data but as an active, evolving element of the model’s cognitive processes. It distinguishes three memory types: Parametric Memory, the knowledge encoded in model weights through pretraining or fine-tuning; Activation Memory, the temporary internal states such as KV caches and attention patterns used during inference; and Plaintext Memory, editable and retrievable external data such as documents or prompts. These memory types interact within a unified abstraction, the MemoryCube (MemCube), which encapsulates both content and metadata, enabling dynamic scheduling, versioning, access control, and transformations across memory types. This structure lets LLMs adapt, recall relevant information, and evolve their capabilities efficiently, moving beyond their role as static generators.
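The source does not publish MemCube's actual schema, so the following is only a plausible sketch: one wrapper that tags content with its memory type and carries the metadata (owner, version, access list, provenance) that scheduling and governance would need. All field and method names are illustrative assumptions, not the paper's API.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Any


class MemoryType(Enum):
    PARAMETRIC = auto()  # knowledge encoded in model weights
    ACTIVATION = auto()  # transient state such as KV caches
    PLAINTEXT = auto()   # editable external text: documents, prompts


@dataclass
class MemCube:
    """Illustrative unified memory unit: content plus the metadata
    that makes it schedulable, versioned, and access-controlled."""
    content: Any
    mem_type: MemoryType
    owner: str
    version: int = 1
    readers: set[str] = field(default_factory=set)
    provenance: list[str] = field(default_factory=list)

    def readable_by(self, agent: str) -> bool:
        return agent == self.owner or agent in self.readers

    def evolve(self, new_content: Any, note: str) -> "MemCube":
        """Return the next version, keeping a trace of how it changed."""
        return MemCube(
            content=new_content,
            mem_type=self.mem_type,
            owner=self.owner,
            version=self.version + 1,
            readers=set(self.readers),
            provenance=self.provenance + [note],
        )


# A plaintext memory evolving across tasks:
cube = MemCube("User prefers concise answers.", MemoryType.PLAINTEXT, owner="agent_a")
cube = cube.evolve("User prefers concise answers with code.", note="session 2 feedback")
assert cube.version == 2 and not cube.readable_by("agent_b")
```

Having `evolve` return a new versioned object rather than mutating in place is one simple way to keep every past state traceable, in the spirit of the versioning and governance the paper emphasizes.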

MemOS operates on a three-layer architecture: the Interface Layer processes user inputs and converts them into memory-related tasks; the Operation Layer manages the scheduling, organization, and evolution of different memory types; and the Infrastructure Layer ensures secure storage, access governance, and collaboration among agents. All interactions within MemOS are facilitated through MemCubes, ensuring traceable, policy-driven memory operations. Through integrated modules such as MemScheduler, MemLifecycle, and MemGovernance, MemOS sustains a continuous and adaptive memory loop—from the initial user prompt, through memory injection during reasoning, to the storage of useful data for future application. This architectural design not only enhances the model’s responsiveness and personalization but also ensures that memory remains structured, secure, and reusable.
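As a rough illustration of how those three layers might compose (class and method names here are assumptions, not the MemOS API), the sketch below walks one request through the loop: the interface turns a prompt into a memory task, a scheduler stand-in picks which memories to inject, and a governance stand-in gates every write to storage.

```python
class InterfaceLayer:
    """Turns raw user input into a memory-related task."""
    def parse(self, prompt: str) -> dict:
        return {"query": prompt}


class OperationLayer:
    """Stand-in for MemScheduler: decides which stored memories to inject."""
    def __init__(self, store: dict[str, str]):
        self.store = store

    def schedule(self, task: dict) -> list[str]:
        # Naive relevance: inject entries whose key shares a word with the query.
        words = set(task["query"].lower().rstrip("?").split())
        return [v for k, v in self.store.items() if words & set(k.lower().split())]


class InfrastructureLayer:
    """Stand-in for MemGovernance: access control in front of storage."""
    def __init__(self):
        self.writers = {"agent_a"}  # which agents may persist memories

    def save(self, agent: str, key: str, value: str, store: dict[str, str]) -> bool:
        if agent not in self.writers:
            return False  # write denied; a real system would also log this
        store[key] = value
        return True


# One pass through the memory loop: prompt -> injection -> storage.
store = {"project deadline": "Ships on Friday."}
iface, ops, infra = InterfaceLayer(), OperationLayer(store), InfrastructureLayer()

task = iface.parse("What is the project deadline?")
print(ops.schedule(task))  # -> ['Ships on Friday.'], injected into the model's context
infra.save("agent_a", "last question", task["query"], store)  # governed write-back
```

The real system presumably does far more at each layer; the sketch only illustrates the article's point that every memory operation travels an explicit, auditable path.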

In summary, MemOS is a memory operating system that positions memory as a central and manageable component within LLMs. In contrast to traditional models that rely predominantly on static weights and short-term runtime states, MemOS introduces a unified framework for managing parametric, activation, and plaintext memory. Its core is MemCube, a standardized memory unit that supports structured storage, lifecycle management, and task-aware memory augmentation. The system enables more coherent reasoning, greater adaptability, and improved cross-agent collaboration. Future objectives for MemOS include memory sharing across models, self-evolving memory blocks, and a decentralized memory marketplace to support continuous learning and intelligent evolution.
