Magenta RealTime (Magenta RT for short) is an open-source music generation model developed by Google DeepMind, built with the core goal of augmenting human music composition. It is an 800M-parameter Transformer trained on roughly 190,000 hours of instrumental stock music. As the open-source counterpart of Lyria RealTime, it generates high-quality music clips from text or audio prompts, making it well suited to live performance and soundscape creation.
The model is released under the Apache 2.0 and CC-BY 4.0 licenses, with both the code and model weights publicly available to encourage musicians and developers to explore innovative applications. Key strengths include low-latency generation (about 1.25 seconds to produce 2 seconds of music), multimodal input support, and cross-style fusion.
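The low-latency figure can be made concrete: producing each 2-second chunk in about 1.25 seconds yields a real-time factor above 1, which is what makes continuous streaming playback feasible. A minimal sketch of that arithmetic, using the numbers from the text (the helper name here is hypothetical, not part of the Magenta RT API):

```python
def realtime_factor(chunk_seconds: float, generation_seconds: float) -> float:
    """Seconds of audio produced per wall-clock second.

    A value > 1.0 means generation stays ahead of playback,
    so chunks can be streamed back-to-back without gaps.
    """
    return chunk_seconds / generation_seconds


# Figures quoted in the text: 2 s of music in ~1.25 s of compute.
rtf = realtime_factor(chunk_seconds=2.0, generation_seconds=1.25)
print(f"real-time factor: {rtf:.2f}")  # 2.0 / 1.25 = 1.60
```

At a real-time factor of 1.6, each chunk finishes with a 0.75-second margin before playback of the previous chunk ends, which is the headroom that allows live prompt changes between chunks.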
This answer is drawn from the article "Magenta RealTime: an open source model for generating music in real time".