The Rise of Fake Streams: AI-Generated Music Fraud on Streaming Platforms
The Troubling Trend in AI Music
As artificial intelligence continues to reshape the music landscape, a troubling phenomenon is emerging within the streaming industry. Recent findings from Deezer, the French music platform, reveal that as much as 70% of streams associated with AI-generated music are likely the result of fraudulent activity. While AI-generated tracks account for a mere 0.5% of total streams on Deezer, the prevalence of fake listening practices is raising alarm bells among industry insiders.
How the Fraud Works
Fraudsters are exploiting AI music to generate substantial revenue through deceptive means. They deploy bots to artificially inflate listen counts for AI-produced songs, thereby harvesting royalty payments that rightfully belong to legitimate artists. This manipulation relies on drawing large volumes of fake listens to a handful of tracks, circumventing the platform’s detection systems. As Thibault Roucou, Deezer’s director of royalties, explains, the aim is simple: to “get some money from royalties.”
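The pattern Roucou describes, large volumes of plays funneled into a handful of tracks by a small pool of accounts, is the kind of signature a platform's anti-fraud systems look for. The sketch below is a minimal, hypothetical illustration of one such heuristic: it flags tracks whose play counts come from suspiciously few distinct listeners. The data, function name, and thresholds are all invented for illustration and do not reflect Deezer's actual detection methods.

```python
from collections import Counter

# Hypothetical stream log as (user_id, track_id) pairs.
# 5 bot accounts generate 1,000 plays of one AI track;
# 500 organic plays are spread across 50 tracks.
streams = (
    [("bot%d" % (i % 5), "ai_track_1") for i in range(1000)]
    + [("user%d" % i, "track_%d" % (i % 50)) for i in range(500)]
)

def flag_concentrated_tracks(streams, min_plays=100, max_unique_ratio=0.05):
    """Flag tracks whose plays come from very few distinct accounts.

    A high play count combined with a tiny listener-to-play ratio
    matches the bot pattern described above. Thresholds here are
    illustrative only, not real platform parameters.
    """
    plays = Counter()
    listeners = {}
    for user, track in streams:
        plays[track] += 1
        listeners.setdefault(track, set()).add(user)
    return [
        track
        for track, count in plays.items()
        if count >= min_plays
        and len(listeners[track]) / count <= max_unique_ratio
    ]

print(flag_concentrated_tracks(streams))  # the bot-driven track stands out
```

Real systems combine many weak signals (device fingerprints, playback duration, account age) rather than a single ratio, but the underlying idea is the same: fraudulent volume is statistically lumpy in ways organic listening is not.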
This tactic is reminiscent of earlier digital fraud trends, such as click fraud in online advertising, hinting at a broader battle against financially motivated deception in the digital age.
Investment in Detection Technology
To combat this escalating threat, Deezer is investing in advanced technologies designed to detect 100% AI-generated content from leading AI music generators, including models like Suno and Udio. The platform’s analysis suggests that the variety of AI music being manipulated ranges from generic pop and rap tracks to mood playlists crafted for background ambiance.
In a broader context, the growth of AI-generated content on streaming services is staggering: Deezer revealed that such tracks now constitute 18% of daily uploads, roughly 20,000 tracks a day. In response, Deezer has begun to purge fully AI-generated songs from its algorithmic recommendations, aiming to preserve the integrity of its platform.
Implications for Artists and the Industry
This fraudulent activity not only diminishes revenue streams that should go to legitimate creators but also puts a strain on the industry as a whole. According to the International Federation of the Phonographic Industry (IFPI), the global music market was valued at $20.4 billion last year, a sum that makes it an attractive target for scammers looking to capitalize on the rise of generative AI.
Earlier this year, U.S. musician Michael Smith faced criminal charges tied to a scheme wherein he allegedly produced hundreds of thousands of AI-generated tracks that amassed billions of streams, earning him $10 million in royalties. Such schemes underscore not just the growing threat posed by AI in music but also the pressing need for regulatory frameworks to protect artists.
Navigating the Future of AI in Music
The concern surrounding AI-generated content and its misuse reflects a significant moment in the evolution of music and technology. As streaming platforms strive to differentiate genuine artistry from artificial fakes, the industry's stakeholders, from platforms to creators to consumers, must navigate a rapidly shifting landscape. Sustained efforts to thwart fraudulent practices are vital to ensure that both artists and the integrity of the music ecosystem remain protected.
In summary, while AI-driven innovations present exciting opportunities for creativity, they also introduce new challenges that require ongoing vigilance and innovation from all involved. The future of music could very well hinge on how adeptly the industry addresses these pressing issues.

Bio: Priya writes about personal finance, side hustles, gadgets, and tech innovation. She specializes in making complex financial and tech topics easy to digest, with experience in fintech and consumer reviews.