The use of artificial intelligence in music composition is not a new endeavor, but real-time operation has long faced significant obstacles. The Google Magenta team has now unveiled a development that could expand both the technical and creative possibilities of the field. The new model, called Magenta RealTime (Magenta RT for short), generates music in real time and is accessible to anyone thanks to its open source code.
The goal of the project is to bring machine-generated and live, human-created music closer together. The development is based on an 800-million-parameter transformer language model that generates 48 kHz stereo audio. The system uses a neural audio codec to break music down into small, discrete pieces of sound (audio tokens) and reconstructs the generated compositions from them. An important new feature is that Magenta RT can generate music faster than it plays back, which minimizes latency during interaction.
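The actual codec inside Magenta RT is a learned neural network and is not detailed here, but the basic idea of turning a waveform into discrete tokens and decoding it back can be illustrated with a deliberately simplified stand-in. The sketch below uses mu-law quantization in place of a real neural codec; the sample rate matches the 48 kHz stereo target, everything else is an assumption for illustration:

```python
import numpy as np

SAMPLE_RATE = 48_000  # Magenta RT targets 48 kHz stereo audio


def encode_to_tokens(audio: np.ndarray, levels: int = 256) -> np.ndarray:
    """Toy stand-in for a neural codec's encoder: mu-law quantize each
    sample into one of `levels` discrete token IDs."""
    mu = levels - 1
    compressed = np.sign(audio) * np.log1p(mu * np.abs(audio)) / np.log1p(mu)
    return np.round((compressed + 1) / 2 * mu).astype(np.int32)


def decode_from_tokens(tokens: np.ndarray, levels: int = 256) -> np.ndarray:
    """Inverse of the toy encoder: map token IDs back to waveform samples."""
    mu = levels - 1
    compressed = tokens.astype(np.float64) / mu * 2 - 1
    return np.sign(compressed) * np.expm1(np.abs(compressed) * np.log1p(mu)) / mu


# One second of stereo audio (a 440 Hz tone) round-tripped through the codec.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
stereo = np.stack([np.sin(2 * np.pi * 440 * t)] * 2, axis=-1)
tokens = encode_to_tokens(stereo)
reconstructed = decode_from_tokens(tokens)
print(tokens.shape, np.max(np.abs(stereo - reconstructed)))  # quantization error stays small
```

In the real system the language model predicts these token sequences and the codec's decoder turns them back into audio; the mu-law round trip above only shows the discretize-and-reconstruct principle.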
From a musical control perspective, it is particularly noteworthy that the model not only responds to text prompts but can also shift style and mood based on audio samples. This dual approach, combining text and audio conditioning, lets users specify the desired genre, tempo, or instrumentation, as well as continue or transform the soundscape of previously played sections.
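Mixing the two kinds of prompts is possible because text and audio descriptions end up in a shared embedding space, so a single style target can interpolate between them. The following sketch uses random placeholder vectors rather than real model outputs; the function names and the 512-dimensional size are assumptions made for the example:

```python
import numpy as np


def unit(v: np.ndarray) -> np.ndarray:
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)


def blend_style(text_emb: np.ndarray, audio_emb: np.ndarray, text_weight: float) -> np.ndarray:
    """Interpolate between a text-derived and an audio-derived style embedding.

    Both inputs are assumed to live in the same embedding space, which is
    what lets one conditioning vector mix "describe it in words" with
    "make it sound like this clip"."""
    mixed = text_weight * unit(text_emb) + (1.0 - text_weight) * unit(audio_emb)
    return unit(mixed)


# Placeholder vectors standing in for real embedding-model outputs.
rng = np.random.default_rng(0)
text_emb = rng.normal(size=512)   # e.g. embedding of "upbeat synthwave, 120 BPM"
audio_emb = rng.normal(size=512)  # e.g. embedding of a previously played clip
style = blend_style(text_emb, audio_emb, text_weight=0.7)
print(style.shape)  # (512,) conditioning vector for the generator
```

Shifting `text_weight` during a session is one way such a system can drift smoothly from "sound like my last loop" toward "sound like what I typed".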
The model generates audio in 2-second segments, each conditioned on the preceding 10 seconds of context. This temporal framing not only keeps generation efficient but also reinforces the sense of musical continuity. Magenta RT's capabilities are further enhanced by an embedding module called MusicCoCa, which maps both text and audio into a shared musical embedding space.
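The interplay of chunk size and context window can be shown with a small, self-contained loop. The generator below is a placeholder that returns silence, since the real model call is not reproduced here; what the sketch demonstrates is the rolling 10-second buffer that each new 2-second chunk is conditioned on, reusing the blended style vector idea from the previous sketch:

```python
import numpy as np

SAMPLE_RATE = 48_000
CHUNK_SECONDS = 2      # Magenta RT emits audio in 2-second segments...
CONTEXT_SECONDS = 10   # ...each conditioned on the previous 10 seconds
CHUNK = CHUNK_SECONDS * SAMPLE_RATE
CONTEXT = CONTEXT_SECONDS * SAMPLE_RATE


def generate_chunk(context: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Placeholder for the model call: returns a silent stereo chunk.
    A real implementation would condition on `context` and `style`."""
    return np.zeros((CHUNK, 2), dtype=np.float32)


def stream(style: np.ndarray, num_chunks: int = 5) -> np.ndarray:
    """Stream audio chunk by chunk while keeping a rolling context window."""
    context = np.zeros((0, 2), dtype=np.float32)
    output = []
    for _ in range(num_chunks):
        chunk = generate_chunk(context, style)
        output.append(chunk)
        # Slide the window: keep only the most recent 10 seconds.
        context = np.concatenate([context, chunk])[-CONTEXT:]
    return np.concatenate(output)


audio = stream(style=np.zeros(512, dtype=np.float32))
print(audio.shape)  # (480000, 2): 10 seconds of 48 kHz stereo
```

Real-time operation then comes down to each `generate_chunk` call finishing in under two seconds, which is exactly the faster-than-playback property described above.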
One of the most interesting aspects of the technology is its open licensing. Under the Apache 2.0 license, Magenta RT is freely available on GitHub and the Hugging Face platform, opening up significant opportunities not only for developers but also for artists and educators. For example, the model can be used in live performances, interactive art installations, as a music education tool, or for rapidly prototyping creative ideas.
It is worth noting, however, that Magenta RT is an experimental technology that is primarily trained on instrumental music and does not yet offer complete compositional autonomy. Machine music creation remains a complement to creative human presence, not a replacement for it. However, developments are moving in the direction of increasingly direct, rapid, and nuanced collaboration between algorithms and humans.
Compared to other models, Magenta RT stands out because it does not merely deliver pre-generated music tracks but responds to user commands in real time. This is a significant difference from Google's other model, MusicLM, and Meta's MusicGen, both of which produce an entire piece of music at once. Magenta RT's streaming-based operation thus enables a new kind of musical experimentation and interactive performance.
Google's future plans include releasing a customizable version of the model and exploring the possibility of running it on mobile devices. These developments could represent further steps toward making artificial intelligence an active part of live music creation.
Magenta RealTime is therefore not only a technological advance but also a way of thinking: artificial intelligence can be not only a tool but also a partner in creation.