Musicians are finding real value in AI production tools. So why don't they trust them?

Ask a room full of working musicians whether they use AI in their production process and most hands will go up. Ask whether they fully trust the companies behind those tools, and the room is quieter.

Multiple surveys in 2025 and early 2026 point to the same picture. Close to 60% of musicians now use AI tools in their production workflow. Professional musicians are adopting AI at higher rates than hobbyists. Among musicians earning income from their work, the economic upside of using these tools outweighs the downside by a significant margin.

At the same time, a PRS for Music survey found that 79% of musicians are worried about AI-generated music competing with human-created music, and 92% said AI tools should be transparent about where they source training material. A Deezer and Ipsos study of 9,000 people across eight countries found that 73% consider it unethical for AI companies to use copyrighted material without artist approval.

High adoption and deep unease coexist in this space.

Where the mistrust comes from

The anxiety is not irrational. It has a very specific origin.

Generative AI platforms produce new music by training on vast libraries of existing recordings. In most cases, the artists whose work was used had no say in the matter, received no compensation, and were not told it was happening. Major labels have filed lawsuits against platforms. Artists have spoken out. Legislation is being debated in multiple countries.

This is a genuine ethical problem that the music industry is right to take seriously.

But the story that has taken hold in public perception is broader than the facts warrant. "AI music tools" has become a single category in the minds of many musicians, covering everything from platforms that generate full tracks from scraped data to tools that help a producer balance the mix they have spent weeks writing and recording. These are not the same thing.

The conversation needed to make that distinction has largely not happened. And in its absence, a very reasonable anger about one category of tool has attached itself to all of them.

Why assistive AI is a different conversation entirely

When a producer uploads their stems to a mixing tool like Automix by RoEx, something quite specific is happening. Their music is processed according to the established best practices of music production. No one else's music is involved. The AI is not learning from other artists' recordings, and it is not generating content that competes with human creators. It does the technical work of balancing levels, shaping frequencies, and managing dynamics, then returns either a mix and master ready for release or a DAW project file the producer can open to add whatever finishing creative touches the mix may require.

The musician's creative decisions are still the ones that matter. The arrangement, the sounds, the feel of the track, the choices that make it theirs. The AI handles the technical execution, whilst the human keeps the authorship.

This is categorically different from a system that ingests millions of copyrighted songs and produces new music from them. The ethical obligations are different. The risks are different. The relationship between the tool and the artist's work is different.

The dominant use cases among AI-using musicians reflect this. Stem separation, mix assistance, and audio processing consistently outpace full track generation by large margins. Musicians are not, in the main, asking AI to create for them. They are asking it to help them create better.

What the industry has got wrong

The distinction between generative and assistive AI is rarely explained in the places where musicians would actually read it - on product pages, in onboarding flows, and in the language used to describe how the tools work.

The result is predictable - musicians cannot confidently answer basic questions about what happens to their music when they upload it. That uncertainty accumulates. It shapes purchasing decisions, cancellation reasons, and the conversations producers have with each other.

At RoEx, we have always been committed to the same principles. We do not use uploaded audio to train our models. When a musician processes their work through Automix, ownership remains unchanged. Our mix reports explain exactly what was done to a track and why, in plain language. Nothing operates as a black box. We've written in detail about how we approach this.

These should be standard expectations. But they are not.

What needs to change

The musicians most likely to be long-term, committed users of AI production tools are the ones who care most about their work. They are also the ones paying closest attention to how these tools are built and who is building them.

Building their trust requires a few things that are not technically difficult.

Plain-language data policies that explain clearly what happens to uploaded audio, written for musicians rather than compliance teams. Transparent explanations of how AI processing decisions are made. An explicit public commitment not to train models on user content without informed consent. And a willingness to draw the distinction between assistive and generative AI clearly and consistently - in marketing, in onboarding, and in product design.

The tools that help musicians make better music already exist. The trust infrastructure that should sit around them is still catching up. Getting that right is not just the ethical thing to do. It is what makes the long-term relationship between AI tools and the people who make music sustainable.

Musicians deserve to know exactly what they are working with. The companies that make that answer easy to find will be the ones still standing when the dust settles.

David Ronan is the CEO and founder of RoEx, which builds AI-powered mixing, mastering and analysis tools based on research from Queen Mary University of London.