The music industry is facing an unprecedented crisis, and Sony Music's latest numbers prove just how serious the problem has become. At the IFPI Global Music Report event, Dennis Kooker, president of global digital business at Sony Music, revealed a staggering statistic: the company has requested takedowns of more than 135,000 generative AI deepfakes of its artists' music.
That's not a typo. One hundred and thirty-five thousand fake recordings designed to sound like the artists Sony Music represents.
The scale of this problem is difficult to comprehend. Each of those takedown requests represents a deepfake that somehow made it onto the internet—likely on streaming platforms, social media, or other digital spaces where fans might stumble upon them. These aren't just minor inconveniences; they represent a fundamental threat to the music industry's creative ecosystem and the livelihoods of the artists who depend on it.
**Why This Matters**
Generative AI has made it easier than ever to create convincing audio deepfakes. With just a small sample of an artist's voice, modern AI tools can generate entirely new songs that sound authentically like that artist. While the technology itself is neutral—there are legitimate creative uses—the unauthorized creation and distribution of fake tracks undermines artists' rights and can spread misinformation at scale.
When a deepfake of a famous artist goes viral, several problems emerge simultaneously. Fans might inadvertently listen to unauthorized content, streaming royalties get fragmented across fake tracks, and the artist's reputation and creative control are compromised. The sheer volume of fakes can also make it nearly impossible for fans to distinguish between genuine new releases and AI forgeries.
**The Bigger Picture**
Sony Music's disclosure is significant because it offers rare transparency about the scope of the deepfake problem. While other major labels are undoubtedly dealing with similar issues, Sony's willingness to share these numbers highlights the urgency of the situation. The fact that they're actively requesting takedowns shows they're taking action, but the ever-growing numbers suggest they're fighting a losing battle against algorithmic content generation.
This revelation comes at a critical time for music industry regulation. Governments and streaming platforms are increasingly being asked to develop stronger protections against unauthorized AI-generated content. The question isn't whether AI will be used to create music—it will be—but rather how the industry can maintain control over its own creative output while the technology continues to evolve.
**What Comes Next?**
The music industry's response has been multi-pronged: takedown requests, advocacy for stronger regulations, and calls for AI companies to build safeguards into their platforms. However, with more than 135,000 takedown requests already filed and technology that makes producing new deepfakes easier every day, reactive takedowns may not be enough.
As Sony Music's numbers demonstrate, the deepfake crisis is no longer theoretical—it's happening right now, at massive scale. The music industry's next move could determine whether artists retain control of their own voices or whether AI-generated impersonations become an unavoidable part of the digital music landscape.