
Meta introduces MusicGen, the open-source AI that creates music from sounds and text

With MusicGen, Meta is trying to make its mark in the race for generative artificial intelligence. In recent years, users have watched the emergence of AI-based algorithms and tools capable of performing the most disparate tasks: from ChatGPT and its ability to turn text input into written output, to models dedicated to creating images and videos.

MusicGen, too, is a language model that leverages deep learning. But unlike the AIs listed above, it aims to generate original music: the user enters a textual request and the software turns it into a melody.

A particularly interesting aspect of MusicGen is that the model is entirely open source. This means developers from all over the world can study it, and perhaps use it to build even more advanced AI tools.

How MusicGen works

The model underlying MusicGen is the same Generative Pre-trained Transformer architecture that the OpenAI team brought to success with the aforementioned ChatGPT.

To train the model, the developers used roughly 20,000 hours of pre-existing music. Half of this archive consists of licensed tracks uploaded in high quality; the other half is made up of tracks from Pond5 and Shutterstock.

Pond5 is an online store dedicated to royalty-free media: its libraries include photographs and videos, as well as music and sound effects. Shutterstock, for its part, is a platform for uploading and downloading multimedia content of all kinds.

MusicGen was created by a Meta division called Audiocraft. The team was able to take advantage of the proprietary EnCodec audio tokenizer at 32 kHz, a tool that made it possible to process several small, lightweight blocks of music in parallel.

To test the potential of Meta's MusicGen, you can visit the Hugging Face website, or run the process locally by choosing one of the three available model versions: 300 million parameters, 1.5 billion parameters, or 3.3 billion parameters.

Whichever model you choose, we recommend running MusicGen on a machine with certain baseline characteristics. Above all, pay attention to the GPU, which should have at least 16 GB of memory.

How to use MusicGen

Using Meta's MusicGen is not all that different from using the now well-known ChatGPT. The user simply writes a request, describing in as much detail as possible the kind of music they would like to hear.

As with ChatGPT, the only limit here is the writer's imagination. You can ask for a piece that recalls the style of 90s American hip hop, or enter a much more specific input: for example, a song inspired by the metal genre but played with Caribbean instruments.

The artificial intelligence accepts and processes the request, then returns an original track. At this early stage MusicGen produces very short pieces: the average length is 12 seconds. Moreover, the output is not always faithful to the specifications entered by the user.

At the same time, the MusicGen model is considered better than Google's, for at least two reasons. First, it does not require a self-supervised semantic representation. Second, it currently needs only 50 autoregressive steps per second of audio.
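The figures quoted above can be combined in a quick back-of-the-envelope calculation: at 50 autoregressive steps per second, a typical 12-second clip takes about 600 decoding steps, and with EnCodec sampling at 32 kHz each step accounts for 640 waveform samples. (The arithmetic below only restates the article's numbers; it is not taken from the MusicGen code.)

```python
# Back-of-the-envelope arithmetic from the figures in the article.
STEPS_PER_SECOND = 50      # autoregressive steps per second of audio
SAMPLE_RATE_HZ = 32_000    # EnCodec tokenizer runs at 32 kHz
clip_seconds = 12          # average clip length at this stage

total_steps = STEPS_PER_SECOND * clip_seconds          # 600 steps per clip
samples_per_step = SAMPLE_RATE_HZ // STEPS_PER_SECOND  # 640 samples per step

print(total_steps, samples_per_step)  # 600 640
```

Fewer steps per second of audio means fewer sequential decoding passes, which is one reason the model is comparatively fast to sample from.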
