Researchers at Google have developed an AI model called MusicLM that can generate minutes-long musical pieces from text prompts. It can also transform a whistled or hummed melody into other instruments, much as systems like DALL-E generate images from written prompts.
Google has uploaded several samples produced by MusicLM. They include 30-second snippets of what sound like actual songs, created from paragraph-long descriptions that specify a genre, a vibe, and even particular instruments.
There are also five-minute-long pieces generated from just one or two words such as “melodic techno,” along with musical interpretations of phrases like “futuristic club” and “accordion death metal.”
Here’s a piece generated using the “story mode” prompts, in which the model is given a script to morph between prompts: https://google-research.github.io/seanet/musiclm/examples/audio_samples/story_mode/example_2.wav
The model can also simulate human vocals, though the quality isn’t quite there yet; they sound grainy or staticky, like the one here. The technology is still in its early stages, and Google has no plans to release the model at this point, citing the risk of “potential misappropriation of creative content” as well as potential cultural appropriation or misrepresentation.
AI-generated music has been around for a while: there have been systems that compose pop songs, imitate Bach better than a human could in the ’90s, and accompany live performances. One recent system, Riffusion, uses the AI image generator Stable Diffusion to turn text prompts into spectrograms that are then converted into music.
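For a sense of how that spectrogram approach works, here’s a minimal sketch of the final step, assuming the generated image is treated as a magnitude spectrogram: the Griffin-Lim algorithm estimates the phase the image lacks and recovers a waveform. The image model itself is out of scope, so a spectrogram computed from a test tone stands in for its output.

```python
import numpy as np
import librosa
import soundfile as sf

sr = 22050
# Stand-in for a model-generated spectrogram: the magnitude STFT of a chirp.
y = librosa.chirp(fmin=110, fmax=880, sr=sr, duration=3.0)
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# Griffin-Lim iteratively estimates the missing phase, turning a
# magnitude-only "picture" of sound back into audio.
y_hat = librosa.griffinlim(S, n_iter=32, hop_length=512)
sf.write("reconstructed.wav", y_hat, sr)
```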
According to the MusicLM paper, the model outperforms other systems in “quality and adherence to the caption,” and it can also take in audio and copy the melody. That ability is perhaps MusicLM’s most impressive feature, and it’s obvious in the demo: you can play the input audio, in which someone hums or whistles a tune, then hear how the model reproduces it as an electronic synth lead, a string quartet, a guitar solo, and so on.
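MusicLM itself isn’t public, so there’s no API to call, but the first step its melody conditioning implies, recovering a pitch contour from a hummed or whistled recording, can be sketched with an off-the-shelf tracker. This uses librosa’s pYIN implementation; the input file name is hypothetical, and MusicLM’s actual melody representation is learned rather than rule-based.

```python
import librosa

# Hypothetical recording of someone humming or whistling a tune.
y, sr = librosa.load("hummed_melody.wav", sr=22050)

# pYIN tracks the fundamental frequency frame by frame and flags
# which frames actually contain a voiced pitch.
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Convert the voiced frames to note names for a rough melody sequence.
notes = [librosa.hz_to_note(f) for f in f0[voiced]]
print(notes[:20])
```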
MusicLM is a significant step forward for AI-generated music. It’s impressive that the model can generate music from text prompts and even copy melodies from humming or whistling. Although the vocal quality isn’t perfect yet, it’s still a remarkable achievement. Google’s decision to hold off on releasing MusicLM is understandable, given the risk of “potential misappropriation of creative content” and of cultural appropriation or misrepresentation.
In the meantime, Google has publicly released MusicCaps, a dataset of around 5,500 music-text pairs, which could help in training and evaluating other musical AI models.
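As a quick way to explore that release, here’s a sketch that loads the caption metadata, assuming the dataset is mirrored on the Hugging Face Hub under an id like "google/MusicCaps" and exposes fields such as ytid and caption (both assumptions; the official release pairs text captions with YouTube clip IDs rather than bundled audio).

```python
from datasets import load_dataset

# Load only the caption metadata; the audio lives on YouTube, keyed by clip id.
ds = load_dataset("google/MusicCaps", split="train")
print(len(ds))  # roughly 5,500 music-text pairs

row = ds[0]
print(row["ytid"], row["caption"])  # clip id and its free-text description
```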