Music giant Sony Music has requested the removal of more than 135,000 fraudulent songs impersonating its artists on streaming services, the company announced at the launch of the music industry's Global Music Report in London.
The so-called deepfakes were created using generative AI and targeted some of the company's biggest acts, including Beyoncé, Queen, Harry Styles, Bad Bunny, Miley Cyrus and Mark Ronson. Sony says the proliferation of such counterfeits causes direct commercial harm to legitimate recording artists and deliberately targets musicians who are promoting new albums.
What the Left Is Saying
Progressive advocates and music industry groups on the left have called for robust government regulation of AI-generated content, arguing that artists deserve strong protections against unauthorized voice cloning. The International Federation of the Phonographic Industry (IFPI) has urged streaming platforms to implement mandatory labeling tools that can identify fake or AI-generated music at the point of upload.
"The challenge of identifying and labelling AI material is absolutely the next critical challenge," said Victoria Oakley, CEO of the IFPI. "I'm very optimistic that in the UK, they have decided to pause and think again" about allowing AI firms to train software on copyrighted works without permission.
Industry groups have welcomed the UK government's decision to pause AI copyright exemptions, seeing it as a model for protecting creative workers. Supporters argue that without proper disclosure requirements, fans cannot distinguish between genuine human creativity and unauthorized AI-generated content, which risks undermining trust in the streaming ecosystem.
What the Right Is Saying
Some conservative voices and tech industry stakeholders have expressed concerns that aggressive regulation of AI-generated music could stifle innovation and create unnecessary government intervention in the marketplace. They argue that streaming services themselves are best positioned to address fraud through existing terms of service and market mechanisms.
Others have noted that the music industry's calls for mandatory AI content labeling could set a precedent for broader government mandates on content verification online. Some free-market advocates suggest that consumer choice and platform competition will naturally drive services to offer transparency tools if users demand them.
The debate reflects broader tensions in tech policy between protecting existing rights holders and maintaining space for new creative technologies. Industry opponents of heavy-handed mandates argue that the market can self-correct if platforms face reputational pressure from users who discover they've been listening to AI-generated content.
What the Numbers Show
The IFPI's Global Music Report showed that recorded music revenues grew by 6.4% last year, reaching $31.7 billion (£23.8 billion). It was the 11th consecutive year of growth, following the industry's recovery from piracy and financial decline through streaming subscriptions.
Sony Music has identified approximately 135,000 deepfake tracks to date but believes this represents only a fraction of the total uploaded to streaming services. Since last March alone, the company has identified some 60,000 songs falsely purporting to feature artists from its roster.
Unofficially, the music industry estimates that up to 10% of content across all streaming platforms is fraudulent. Dennis Kooker, president of Sony's global digital business, noted that French streaming company Deezer found that 34% of songs submitted to its service are now categorized as AI-generated.
The UK remains the world's third largest music market, while China overtook Germany as the fourth biggest, having entered the top 10 less than a decade ago. Taylor Swift was the biggest artist in the world last year, followed by K-pop band Stray Kids and Canadian rapper Drake.
The Bottom Line
The removal of 135,000 deepfake tracks highlights the growing challenge that AI-generated content poses to the music industry. As generative AI tools become cheaper and more accessible, experts expect the volume of fraudulent content to continue increasing.
The industry is now pushing for mandatory labeling rules that would require streaming platforms to identify AI-generated music at the point of upload. Streaming services have historically been slow to adopt such measures, though some companies like Deezer have already implemented detection tools.
What remains unclear is whether governments will enact formal regulations requiring AI content disclosure or whether the market will drive voluntary adoption of transparency tools. Sony's success in getting 135,000 deepfake tracks removed demonstrates that existing copyright enforcement mechanisms can work, but industry leaders say they are fighting an uphill battle against a rapidly evolving threat.