AI is reshaping music streaming—both as a tool for innovation and a weapon for fraud. While bad actors use AI to manipulate streams and create deepfake artists, platforms are deploying AI-driven defenses to fight back. Is AI the problem, solution, or both?
The music industry has long been shaped by technological disruption, but few innovations have had as profound an impact as artificial intelligence.
AI is transforming how music is created, distributed, and consumed—but it is also fueling an escalating battle over fraud.
Streaming manipulation—committed, for example, by creating fake artist profiles and burying fraudulent streams among hundreds or thousands of AI-generated tracks—is diverting billions of dollars from legitimate creators.
At the same time, AI-driven fraud detection tools are becoming the frontline defense for digital service providers (DSPs).
As AI is used to flood streaming platforms with synthetic music and facilitate fraudulent activity, the industry is left grappling with the question: Can AI truly help clean up the fraud problem it exacerbated?
The rise of AI-generated music and streaming fraud
The rapid proliferation of AI-generated content is reshaping the streaming landscape. Deezer, for example, reports that roughly 10,000 AI-generated tracks are uploaded to its platform every day as of January 2025, accounting for about 10% of daily uploads. While AI-generated music offers new creative opportunities, it has also lowered the barrier to entry for fraudsters looking to exploit streaming platforms.
What makes AI-driven music fraud so pernicious is how easily it can be scaled.
Scammers no longer need a catalog of songs dug up from obscure, undigitized albums or human-operated “click farms.” They can deploy generative AI to pump out endless fake tracks, then script bots to stream them 24/7.
AI voice clones and metadata problems
Streaming fraud isn’t just about bots gaming the system—it extends to AI-generated deepfakes that mimic real artists. In April 2023, a track called “Heart on My Sleeve” featuring AI-generated vocals of Drake and The Weeknd went viral on TikTok, accumulating hundreds of thousands of streams before being pulled from platforms. Frank Ocean fans fell victim to a similar scam when an individual sold fake AI-generated songs as leaked tracks, netting over $13,000 CAD from deceived fans. The increasing sophistication of AI voice cloning poses a significant challenge to artists’ rights and revenue streams.
In another example of how fake music targets real artists, a 2024 investigation by The Verge revealed that fraudsters were uploading AI-generated tracks to legitimate artists’ Spotify pages by manipulating metadata, tricking listeners into streaming tracks that the artist in question had no part in. Because the money an artist receives for streams moves through a distributor, any royalties gained from these fake tracks go to the people uploading them—not the legitimate artist.
How one man used AI to get $10M in fraudulent royalty payouts
In 2024, the US saw its first criminal prosecution of a streaming fraud scheme driven by AI-generated content, setting a precedent for treating such cases as serious federal crimes.
Michael Smith and his co-conspirators used AI to orchestrate a massive seven-year-long music fraud scheme that would eventually be investigated by the FBI. They employed AI tools to create an enormous catalog of fake songs, releasing them under fabricated artist names. To generate revenue, Smith employed bot networks that streamed these tracks in high volumes. At its peak, his operation used over 1,000 bot accounts, earning approximately $3,300 per day—totaling over $1.2 million per year.
To avoid detection, Smith distributed streams across thousands of tracks and accounts, preventing any single song from appearing suspicious. Internal emails show Smith explicitly recognized this strategy: “We need to get a TON of songs fast to make this work around the anti-fraud policies these guys are all using now,” he wrote in late 2018.
By using VPNs to route the bots’ traffic through different IP addresses and locations, he made the automated streams look geographically distributed and more like real user behavior.
Smith’s scheme was made possible—and lucrative—by the royalty payout model that many DSPs use.
The pro rata royalty payout model
The incentive for fraud is deeply embedded in the pro rata royalty model, which pools all of the net revenue generated from subscriptions and ads and distributes it based on each rights holder’s share of total streams. This system has drawn sustained criticism because it enables manipulation at scale: any royalties claimed by bad actors are siphoned directly from the pool that legitimate artists draw on.
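To make the dilution concrete, here is a minimal Python sketch of a pro rata pool with hypothetical numbers (not any DSP’s actual revenue or rates). Every fraudulent stream enlarges the denominator, so bot traffic is paid straight out of the money honest artists would otherwise receive:

```python
# A minimal sketch of the pro rata pool with hypothetical numbers
# (not any DSP's actual revenue or payout rates).

def pro_rata_payouts(pool: float, streams: dict[str, int]) -> dict[str, float]:
    """Split one revenue pool in proportion to each party's share of total streams."""
    total = sum(streams.values())
    return {party: pool * count / total for party, count in streams.items()}

# A $1,000,000 monthly pool. A bot operation adds 2M fraudulent streams.
honest = {"artist_a": 6_000_000, "artist_b": 2_000_000}
with_fraud = {**honest, "bot_farm": 2_000_000}

print(pro_rata_payouts(1_000_000, honest))
# {'artist_a': 750000.0, 'artist_b': 250000.0}
print(pro_rata_payouts(1_000_000, with_fraud))
# {'artist_a': 600000.0, 'artist_b': 200000.0, 'bot_farm': 200000.0}
```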
A striking early example of this critique came in 2014, when the band Vulfpeck released an album called Sleepify, consisting entirely of silent 31-second tracks. They encouraged fans to stream the album on repeat, earning $20,000 before Spotify removed it a few weeks later. The band used the proceeds to fund a concert tour with free tickets, framing the stunt as a protest against Spotify’s payout model.
Fraud doesn’t just affect payouts—AI-generated fake streams can distort trending charts, algorithmic recommendations, and even marketing strategies based on supposed audience engagement, further harming legitimate creators.
The battle against streaming fraud: AI as the enforcer
As fraud techniques evolve, so do the industry’s countermeasures. DSPs and digital distributors are turning to AI-powered fraud detection to identify and shut down fraudulent activity.
Deezer’s “Radar” technology laid the foundation for its fraudulent content detection, scanning catalogs for unusual streaming patterns and identifying manipulated tracks even when the signal is distorted or the tempo has been changed. In December 2024, the company filed two patents for a new AI detection tool that it claims is more robust than existing options and is trained to recognize tracks created by a range of generative AI models.
In the “Fake Drake” incident mentioned earlier, advanced voice recognition tools identified the synthetic nature of the vocals within 48 hours of the track’s release; the episode prompted widespread updates to content ID systems across platforms, including Spotify and YouTube.
Some copyright infringers even introduce low-level ambient sounds, white noise, or other imperceptible distortions in an effort to avoid detection. However, AI-powered detection models are evolving in response, learning to spot these alterations and flag manipulated tracks with increasing accuracy.
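Deezer has not published Radar’s internals, but the general principle behind distortion-robust matching can be sketched: compare compact spectral summaries rather than raw waveforms, so low-level noise barely moves the similarity score. The toy Python example below (an illustration of the principle, not Deezer’s method) shows a chroma-based signature surviving added white noise on a synthetic tone:

```python
# Toy illustration of distortion-robust matching: summarize pitch content
# into a compact chroma vector, then compare signatures. Low-level added
# noise leaves the signature nearly unchanged. Not Deezer's actual method.
import numpy as np
import librosa

def chroma_signature(y: np.ndarray, sr: int) -> np.ndarray:
    """Average chroma vector: a crude, noise-tolerant summary of pitch content."""
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    sig = chroma.mean(axis=1)
    return sig / np.linalg.norm(sig)

sr = 22050
t = np.linspace(0, 5.0, 5 * sr, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
noisy = clean + 0.01 * np.random.default_rng(0).normal(size=clean.size)  # "imperceptible" noise

sim = float(chroma_signature(clean, sr) @ chroma_signature(noisy, sr))
print(f"cosine similarity despite added noise: {sim:.3f}")  # stays close to 1.0
```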
Beyond the in-house efforts of streaming companies, a new crop of anti-fraud startups has emerged.
Beatdapp, for instance, has built an auditing platform that detects irregular listening patterns at massive scale. By analyzing trillions of data points from streaming records, Beatdapp’s algorithms can flag anomalies: say, an account with five devices, two of them in a different location from the other three, all listening to the same content as 10,000 other devices, a pattern suggesting a real user whose account has been compromised. Beatdapp says it analyzed over 2 trillion streams and 20 trillion data points in search of fraud in 2023 alone.
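Beatdapp’s production models are proprietary, but the device-and-location heuristic described above can be approximated with a simple rule. In the illustrative Python sketch below, the event fields and thresholds are hypothetical:

```python
# A toy rule mirroring only the heuristic described above: many devices per
# account, split across locations, all playing the same content. The fields
# and thresholds are hypothetical, not Beatdapp's actual features.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class StreamEvent:
    account: str
    device: str
    city: str
    track: str

def flag_suspicious_accounts(events: list[StreamEvent],
                             max_devices: int = 4,
                             max_cities: int = 1) -> set[str]:
    devices: dict[str, set[str]] = defaultdict(set)
    cities: dict[str, set[str]] = defaultdict(set)
    for e in events:
        devices[e.account].add(e.device)
        cities[e.account].add(e.city)
    return {acct for acct in devices
            if len(devices[acct]) > max_devices or len(cities[acct]) > max_cities}

events = [
    StreamEvent("compromised", f"d{i}", city, "track_x")
    for i, city in enumerate(["berlin", "berlin", "berlin", "lagos", "lagos"])
] + [StreamEvent("normal", "phone", "berlin", "track_y")]
print(flag_suspicious_accounts(events))  # {'compromised'}: 5 devices across 2 cities
```

A real system would score streams probabilistically across far more signals; the point here is simply that the anomaly described in the text reduces to checkable patterns in the event data.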
The future of AI, fraud, and the music industry
With AI-generated music and fraud detection locked in a technological arms race, the future of streaming remains uncertain. Legal battles over AI’s role in copyright infringement are heating up, with major record labels filing lawsuits against AI music startups like Suno and Udio for allegedly training their models on copyrighted works without permission. The debate over whether AI-generated works should receive copyright protection—or if AI itself can be an author—remains unresolved.
Alternative royalty distribution models could help curb fraud by making manipulation less lucrative. User-centric payout models, where royalties are distributed based on individual listener subscriptions rather than overall stream counts, have been proposed as a way to reduce fraudulent activity.
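As a rough sketch of how user-centric accounting blunts the incentive (hypothetical fee and numbers, not any platform’s actual model): each subscriber’s fee is split only among the artists that subscriber actually played, so a bot account can never redirect more than its own subscription fee.

```python
# User-centric payouts: split each subscriber's fee among only the artists
# that subscriber played. Hypothetical numbers, not any DSP's actual model.
from collections import defaultdict

def user_centric_payouts(fee_per_user: float,
                         listens: dict[str, dict[str, int]]) -> dict[str, float]:
    """listens maps user -> {artist: stream_count}."""
    payouts: dict[str, float] = defaultdict(float)
    for user_streams in listens.values():
        total = sum(user_streams.values())
        for artist, count in user_streams.items():
            payouts[artist] += fee_per_user * count / total
    return dict(payouts)

listens = {
    "real_fan": {"artist_a": 50},
    "bot_account": {"bot_farm": 100_000},  # heavy botting...
}
print(user_centric_payouts(10.0, listens))
# {'artist_a': 10.0, 'bot_farm': 10.0} -- botting caps out at one subscription fee
```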
Tamper-proof watermarking technologies that would tag whether a track was made by a human, an AI, or a combination of both are another proposed piece of the solution.
Government intervention is also taking shape through proposed legislation like the No AI FRAUD Act and the NO FAKES Act, both of which aim to establish federal protections against unauthorized AI-generated likenesses and deepfakes. While these bills primarily address identity theft and voice cloning, they signal a broader push for AI regulation within the music industry.
Ultimately, the success of AI in combating streaming fraud will depend on how swiftly platforms, regulators, and artists adapt to the evolving tactics of fraudsters. The fight is ongoing, but with continued innovation and collaboration, the industry can work toward a fairer, fraud-free future for digital music.
Stop music fraud from ever being paid out
Fraudsters are getting more sophisticated, but Know Your Artist (KYA) verification is one key to stopping fraudulent payouts before they happen.
Trolley Trust helps music businesses protect their royalty distributions with secure live image capture and ID verification, watchlist screening, and risk-based workflows, ensuring funds are paid only to verified rights holders.
With Trolley’s end-to-end platform, you can:
✅ Verify artist identities through automated verification of documentation against 11,000+ official government ID templates from more than 200 countries, live photo validation, and TIN validation.
✅ Screen every payee against global watchlists to prevent payouts to bad actors.
✅ Reduce risk with customized workflows based on Trolley-assigned risk scores for every recipient.
✅ Streamline global payouts to 210+ countries while ensuring that real artists get their due, and bad actors never see a payout.
The result? Secure, seamless royalty payments that protect your business, artists, and reputation from fraudulent activity.
Want to reinforce your royalty payout process with the industry’s strongest KYA toolset?
Leading music companies like SoundCloud, United Masters, CD Baby, Soundrop, and more trust Trolley to pay over 1.5 million musicians worldwide. Learn how Trolley can help you transform your royalty payouts and stay ahead of fraud—get in touch today.