Revolutionary AI Memory System Unveiled

Large Language Models (LLMs) are central to the pursuit of Artificial General Intelligence (AGI), yet they currently face considerable limitations concerning memory management. Contemporary LLMs typically depend on knowledge embedded within their fixed weights and a limited context window during operation, which hinders their ability to retain or update information over extended periods. While approaches such as Retrieval-Augmented Generation (RAG) integrate external knowledge, they frequently lack a structured approach to memory. This often results in issues like the forgetting of past interactions, reduced adaptability, and isolated memory across different platforms. Essentially, current LLMs do not treat memory as a persistent, manageable, or shareable resource, which constrains their practical utility.

In response to these challenges, researchers from MemTensor (Shanghai) Technology Co., Ltd., Shanghai Jiao Tong University, Renmin University of China, and the Research Institute of China Telecom have collaboratively developed MemOS. This novel memory operating system positions memory as a fundamental resource within language models. A key component of MemOS is MemCube, a unified abstraction that oversees parametric, activation, and plaintext memory. MemOS facilitates structured, traceable, and cross-task memory handling, allowing models to continuously adapt, internalize user preferences, and maintain consistent behavior. This represents a significant shift, transforming LLMs from static generators into dynamic, evolving systems capable of long-term learning and coordination across various platforms.

As AI systems become increasingly complex, handling a diverse range of tasks, roles, and data types, language models must advance beyond mere text comprehension to encompass memory retention and continuous learning. Current LLMs’ deficiency in structured memory management restricts their capacity for adaptation and growth over time. MemOS addresses this by treating memory as a core, schedulable resource, enabling long-term learning through structured storage, version control, and unified memory access. Unlike conventional training methodologies, MemOS supports a continuous "memory training" paradigm, which blurs the distinction between learning and inference. Furthermore, it incorporates governance features, ensuring traceability, access control, and secure utilization within evolving AI systems.

MemOS is designed as a memory-centric operating system for language models, conceptualizing memory not merely as stored data but as an active, evolving element of the model’s cognitive processes. It categorizes memory into three distinct types: Parametric Memory, which encompasses knowledge encoded in model weights through pretraining or fine-tuning; Activation Memory, referring to temporary internal states such as KV caches and attention patterns utilized during inference; and Plaintext Memory, which consists of editable and retrievable external data, including documents or prompts. These memory types interact within a unified abstraction called the MemoryCube (MemCube), which encapsulates both content and metadata, enabling dynamic scheduling, versioning, access control, and transformations across memory types. This structured system empowers LLMs to adapt, recall relevant information, and efficiently evolve their capabilities, moving beyond their role as static generators.
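To make the MemCube abstraction concrete, here is a minimal sketch in Python of what such a unit might look like. The class and field names are illustrative assumptions, not the actual MemOS API; the point is that a memory unit pairs its payload with metadata (type, owner, version, tags) so it can be scheduled, governed, and versioned rather than mutated in place.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, List

class MemoryType(Enum):
    PARAMETRIC = "parametric"   # knowledge baked into model weights
    ACTIVATION = "activation"   # transient runtime state (e.g. KV caches)
    PLAINTEXT = "plaintext"     # editable external text (documents, prompts)

@dataclass
class MemCube:
    """Illustrative memory unit: payload plus governance metadata."""
    content: Any
    mem_type: MemoryType
    owner: str
    version: int = 1
    tags: List[str] = field(default_factory=list)

    def update(self, new_content: Any) -> "MemCube":
        # A versioned, traceable edit: return a new cube with a bumped
        # version instead of overwriting the old one.
        return MemCube(new_content, self.mem_type, self.owner,
                       self.version + 1, list(self.tags))
```

Because `update` returns a new versioned cube, earlier versions remain intact for lineage tracking and rollback, which is the behavior the paper's "traceable" memory handling implies.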

MemOS operates on a three-layer architecture: the Interface Layer processes user inputs and converts them into memory-related tasks; the Operation Layer manages the scheduling, organization, and evolution of different memory types; and the Infrastructure Layer ensures secure storage, access governance, and collaboration among agents. All interactions within MemOS are facilitated through MemCubes, ensuring traceable, policy-driven memory operations. Through integrated modules such as MemScheduler, MemLifecycle, and MemGovernance, MemOS sustains a continuous and adaptive memory loop—from the initial user prompt, through memory injection during reasoning, to the storage of useful data for future application. This architectural design not only enhances the model’s responsiveness and personalization but also ensures that memory remains structured, secure, and reusable.
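The adaptive memory loop described above can be sketched as a single pass through the three layers. This toy function is an assumption-laden stand-in (word-overlap retrieval in place of real scheduling, a plain list in place of governed storage), meant only to show the flow: interpret the prompt, recall relevant memories, inject them into the context, then persist the interaction for future turns.

```python
from typing import List

def memory_loop(prompt: str, store: List[str], top_k: int = 2) -> str:
    """One illustrative pass of a MemOS-style memory loop."""
    # Interface layer: turn the user prompt into a retrieval query.
    query = set(prompt.lower().split())

    # Operation layer: rank stored memories by crude word overlap
    # (a stand-in for whatever scheduling MemScheduler actually does).
    ranked = sorted(store, key=lambda m: -len(query & set(m.lower().split())))
    recalled = ranked[:top_k]

    # Inject recalled memories ahead of the prompt for reasoning.
    context = "\n".join(recalled + [prompt])

    # Infrastructure layer stand-in: persist the new interaction
    # so it is available to later turns.
    store.append(prompt)
    return context
```

Each call both consumes and grows the store, which is the essence of the continuous loop: memory shapes the current response, and the current interaction becomes future memory.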

In summary, MemOS is a memory operating system that positions memory as a central and manageable component within LLMs. In contrast to traditional models that predominantly rely on static model weights and short-term runtime states, MemOS introduces a unified framework for managing parametric, activation, and plaintext memory. Its core is MemCube, a standardized memory unit that supports structured storage, lifecycle management, and task-aware memory augmentation. This system facilitates more coherent reasoning, enhanced adaptability, and improved cross-agent collaboration. Future objectives for MemOS include enabling memory sharing across models, the development of self-evolving memory blocks, and the establishment of a decentralized memory marketplace to support continuous learning and intelligent evolution. 

Where is Artificial Intelligence Really Today?
The development of artificial intelligence has produced spectacular and often impressive results in recent years. Systems like ChatGPT can generate natural-sounding language, solve problems, and in many tasks, even surpass human performance. However, a growing number of prominent researchers and technology leaders — including John Carmack and François Chollet — caution that these achievements don’t necessarily indicate that artificial general intelligence (AGI) is just around the corner. Behind the impressive performances, new types of challenges and limitations are emerging that go far beyond raw capability.
Rhino Linux Releases New Version: 2025.3
In the world of Linux distributions, two main approaches dominate: on one side, stable systems that are updated infrequently but offer predictability and security; on the other, rolling-release distributions that provide the latest software at the cost of occasional instability. Rhino Linux aims to bridge this divide by combining the up-to-dateness of rolling releases with the stability offered by Ubuntu as its base.
SEAL: The Harbinger of Self-Taught Artificial Intelligence
For years, the dominant belief was that human instruction—through data, labels, fine-tuning, and carefully designed interventions—was the key to advancing artificial intelligence. Today, however, a new paradigm is taking shape. In a recent breakthrough, researchers at MIT introduced SEAL (Self-Adapting Language Models), a system that allows language models to teach themselves. This is not only a technological milestone—it also raises a fundamental question: what role will humans play in the training of intelligent systems in the future?
All it takes is a photo and a voice recording – Alibaba's new artificial intelligence creates a full-body avatar from them
A single voice recording and a photo are enough to create lifelike, full-body virtual characters with facial expressions and emotions – without a studio, actor, or green screen. Alibaba's latest development, an open-source artificial intelligence model called OmniAvatar, promises to do just that. Although the technology is still evolving, it is already worth paying attention to what it enables – and what new questions it raises.
ALT Linux 11.0 Education is the foundation of Russian educational institutions
ALT Linux is a Russian Linux distribution that uses the RPM package format and is built from its own Sisyphus repository. It originally grew out of Russian localization efforts, collaborating with international distributions such as Mandrake and SUSE Linux, with a particular focus on supporting the Cyrillic alphabet.
Spatial intelligence is the next hurdle for AGI to overcome
With the advent of LLMs, machines have gained impressive capabilities, and the pace of development keeps accelerating, with new models appearing constantly that make machines more efficient and more capable. On closer inspection, however, this technology has so far enabled machines to reason only in a single dimension of text. The world we live in, as humans perceive it, is three-dimensional. It is not difficult for a human to determine that something is under or behind a chair, or where a ball flying towards us will land. According to many artificial intelligence researchers, for AGI, or artificial general intelligence, to emerge, machines must be able to reason in three dimensions, and for this, spatial intelligence must be developed.

Linux distribution updates released in the last few days