UMG’s Michael Nash Called the “AI vs. Artists” Narrative a Lie. Here’s the Case He Made
The debate over whether AI will replace musicians has generated a lot of heat and, according to Universal Music Group’s top digital executive, very little accuracy. Speaking at the HumanX AI conference in San Francisco alongside Splice CEO Kakul Srivastava in a session moderated by Trapital’s Dan Runcie, UMG EVP and Chief Digital Officer Michael Nash used the platform to dismantle what he called “the false narrative of artist replacement” and lay out a sharply different framework for how the industry should think about AI’s role in music creation.
Nash traced the replacement narrative to what he described as a “deeply flawed investment thesis” that circulated in certain corners of the tech and venture capital worlds: that AI would eliminate music content supply constraints, kill the legacy industry model, and effectively jailbreak the music economy. His rebuttal was direct. “The digital transition had already effectively eliminated barriers to entry with 100,000 tracks uploaded a day four years ago,” he said, pointing out that a scarcity of musical content was never the actual problem to be solved. To make the point visceral, Nash applied what he called “cocktail napkin math”: listening to all the music ever created would take 1,700 sleepless years, plus another 17,000 years for each year that AI models keep generating new content. “A nuclear explosion in production volume of content through AI doesn’t have a market,” he said. “There’s no audience for that. It’s not addressing any kind of need.”
Nash’s preferred frame is what he called “Artist x AI,” positioning AI as a force multiplier rather than a replacement. The distinction matters because it changes the entire investment and product logic. Instead of viewing AI as a mechanism to flood the market with machine-generated content, Nash argued the real opportunity is in multiplying what individual artists can express and create, delivering “an order of magnitude advancement in creative potential for artists engaged with technology.” That framing is reflected in UMG’s actual deal-making: the company has built a growing portfolio of AI partnerships that now includes BandLab, KLAY, Udio, Stability AI, YouTube, Nvidia, and Splice, all structured around licensed, artist-consented use of music rather than unlicensed extraction. One notable absence from that list is Suno, whose licensing talks with UMG and Sony were recently reported to be at an impasse with no clear path forward.
Nash also pushed back on the assumption that technically superior AI music tools will automatically produce better music, framing the idea as a category error. “Would Bob Dylan’s music be better if his vocals were perfect?” he asked, before extending the question to Neil Young, Leonard Cohen, Johnny Cash, Kurt Cobain, Courtney Barnett, Lana Del Rey, and punk rock generally. The argument cuts to something fundamental about why music resonates with listeners, and Nash grounded it in both poetry and anthropology, referencing poet Robert Creeley’s idea that “art is triumph of content over form” and citing academic research suggesting that music preceded language in human development. “It all relies on the artist and the creative process and their artistic intent,” Nash said.
Srivastava added a practitioner’s perspective that reinforced Nash’s point from a different angle, drawing on direct feedback from Splice’s user base. One of the most striking signals she cited was that artists told the company its tools were “too easy” and that they wanted more creative control, not less. Her broader point was that the right creator tools for AI-assisted music do not yet exist, at least not in the way artists actually need them. “We do not have the right creator tools yet in market that truly allow musicians to tell their stories well, at least in the AI space; we don’t have them,” she said, a frank acknowledgment that the UMG and Splice partnership, announced in December 2025, represents a starting point rather than a finished product. Under that agreement, UMG artists are expected to bring their own sounds directly into Splice’s AI workflows and play an active role in shaping how the tools are built.
Both Nash and Srivastava offered five-year forecasts that converge on the same conclusion from different directions. Nash compared the current moment to the electronic revolution of a century ago and the digital transition of the early 2000s, predicting that AI will produce “a paradigmatic change” in music and the arts. He described a future where artists “populate intelligent ecosystems with incredible, navigable, hyper-personalized experiences for consumers,” driving significant growth in the overall music economy. Srivastava’s prediction was simpler and perhaps more clarifying: within five years, the question of whether music “was made with AI or not” will stop being asked entirely. “It just will be how it’s done,” she said, “just like we don’t talk about ‘is this electricity powering these lights.’” The conversation will move on. What remains to be settled in the meantime is which companies get the business model right, and on whose terms.