BMG has joined the growing wave of copyright lawsuits against AI companies, filing a case against Anthropic over how its chatbot Claude was trained and how it generates content.
The lawsuit, filed on March 17, alleges that Anthropic used unlicensed song lyrics from BMG’s catalog as part of its training data and continues to reproduce those lyrics in response to user prompts. According to the complaint, Claude is capable of generating substantial portions of copyrighted songs, including well-known tracks such as “Uptown Funk” and “7 Rings,” raising concerns not only about how the model was built but also about how it behaves once deployed.
The case expands on arguments that have been developing across the industry, where music companies are no longer focusing solely on training practices but are also examining the outputs of AI systems. BMG’s position is that infringement occurs at both stages: first through the inclusion of copyrighted material in training datasets, and then through the generation of lyrics that closely resemble or replicate protected works.
The filing also introduces claims about how that training data was obtained. BMG alleges that Anthropic sourced lyrics from scraped databases, including licensed lyric platforms, as well as pirated materials pulled from torrent sites. That claim builds on a recent legal development in a separate case involving authors, where Anthropic faced liability tied to the storage of pirated books and later agreed to a significant settlement.
In its public statements, BMG has emphasized that its position is not against the use of AI itself, but against the use of copyrighted material without permission or compensation. The company argues that generative systems should operate within existing copyright frameworks, where rights holders are able to control how their work is used and are paid accordingly when it is incorporated into new technologies.
The financial scope of the case is substantial. BMG is seeking statutory damages of up to $150,000 per infringed work, and with at least 467 songs identified in the complaint, the potential exposure exceeds $70 million. This places additional pressure on an AI sector that is already facing multiple lawsuits from publishers and record labels pursuing similar claims.
Anthropic has not issued a direct response to this specific filing, but its broader legal position aligns with that of other AI companies currently being challenged in court. The company has previously argued that training models on copyrighted material falls under fair use, framing the process as transformative rather than a direct substitution for the original works.
That question remains unresolved and is now being tested across multiple cases. Courts are being asked to determine whether the use of copyrighted material in AI training qualifies as fair use and whether the outputs generated by these systems cross the line into infringement. The outcome of these cases will have significant implications for how generative AI models are developed, licensed and monetized moving forward.
The lawsuit reflects a broader shift in how the music industry is responding to AI. Rather than waiting for regulatory clarity, rights holders are using litigation to define the boundaries of the technology, with each case contributing to a larger framework that will ultimately determine how AI companies interact with copyrighted material.