
Import elements and vision
Upload any ideas, including audio clips from a project, reference tracks, reference sounds, and text descriptions, to guide the AI.
SoundBerry's AI generates high-quality audio loops and stems featuring specific instruments or sounds that seamlessly match the audio elements already in a project.
Specify the desired elements and instantly generate loops and parts that align with the uploaded material.
Preview multiple variations, layer ideas, and download what works in studio-quality audio formats, ready for use in a DAW.
Integrate AI-powered stem generation into any product or platform to introduce new use cases and attract new audiences.
Help creators move faster and finish more projects by offering intelligent tools to aid the creative process, boosting session time and driving retention.
Use a high-performance API that plugs into any stack with minimal effort, avoiding the need to develop complex infrastructure to bring AI capabilities into products or platforms.
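As a rough illustration of how such an integration might look, the sketch below assembles a hypothetical HTTP request for stem generation. The endpoint URL, parameter names, and response shape are all illustrative assumptions, not SoundBerry's documented API.

```python
import json
from urllib import request

# Placeholder endpoint: NOT a real SoundBerry URL.
API_URL = "https://api.example.com/v1/generate"


def build_generation_request(reference_ids, prompt, api_key):
    """Assemble a POST request asking for a new stem that matches
    existing project elements. All field names are illustrative."""
    payload = {
        "reference_audio": reference_ids,  # e.g. IDs of uploaded clips
        "prompt": prompt,                  # text description of the part
        "format": "wav",                   # studio-quality output format
    }
    return request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Build (but do not send) a request for a bass part that fits the project.
req = build_generation_request(["stem_123"], "warm analog bassline", "MY_KEY")
```

The point of the sketch is the small surface area: a platform would only need to upload its project audio, send one authenticated request per generation, and handle the returned audio, rather than hosting and serving models itself.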
Newly generated elements are aware of the other elements in a project and therefore integrate seamlessly.
Outputs contain the specific instruments or sounds requested without unwanted elements or artefacts.
Training data is ethically sourced by partnering with rightsholders to ensure fair compensation for creators.
Michele Baldo holds a Bachelor's degree in Computer Science and a Master's degree in Cognitive Neuroscience from the University of Trento. He conducted research with the University of Trento, the Fondazione Bruno Kessler, the Centro Interdipartimentale Mente e Cervello (CIMeC), and the Vrije Universiteit Amsterdam. He later worked as an AI Engineer at the Technology Innovation Institute in Abu Dhabi, in the Extreme Scale LLM team, contributing to the development of AI models such as Falcon 180B and Noor, focusing on evaluation, distributed optimization, checkpointing, and fine-tuning through RLHF and DPO.
Marco Comunità is a PhD candidate in Audio AI at the Centre for Digital Music at Queen Mary University of London. He has published seven first-author papers on neural synthesis and audio effects emulation. He has also conducted research at Imperial College London, Sony CSL Paris, and Sony Tokyo. He holds a Bachelor's degree in Computer Engineering and a Master's degree in Electronic Engineering from Sapienza University of Rome, as well as an MSc in Sound and Music Computing from Queen Mary University. In addition, he has worked as a designer and engineer for Blackstar Amplification.
David W. Fong holds a Bachelor's degree in Electronic and Electrical Engineering from Imperial College London. He began his career as a trader at Goldman Sachs before dedicating over five years to AI applied to music: as an R&D engineer at AI Music, project manager at Universal Music Group's REDD incubator, and product manager at DAACI, where he led the development of generative tools for musicians. He also studied violin at the Santa Cecilia Conservatory and holds a certificate in sound recording from the University of Surrey.
Supported by