Spotify has removed 75 million tracks from its platform as part of a significant policy update aimed at combating fraudulent activity and the growing volume of low-quality, AI-generated content. The music streaming service also introduced new rules that prohibit impersonation and the use of spam to manipulate royalty payments.
Key Takeaways
- Spotify has deleted 75 million tracks it identified as "spammy" or fraudulent from its music catalog.
- New policies have been implemented to prohibit impersonation, spam submissions, and content designed to generate fraudulent royalties.
- The action is a response to the massive increase in daily track uploads, many of which are generated by artificial intelligence.
- This move aligns Spotify with other services like Deezer, which have also taken public steps to address AI-driven content issues.
A Response to a Flood of New Content
The music streaming industry is facing an unprecedented challenge: the sheer volume of new content being uploaded daily. The move by Spotify to clean its catalog highlights the scale of this issue. This surge is largely driven by the accessibility of music creation tools, including generative artificial intelligence.
Music service Deezer previously estimated that around 150,000 new tracks are submitted to streaming platforms every day. According to its data, as much as 28% of this new content is generated entirely by AI. This massive influx creates significant challenges for platforms trying to maintain a quality user experience and a fair royalty system.
The 75 million tracks removed by Spotify were described as vehicles for fraud, tools for gaming the royalty system, or simply low-quality "AI slop." This type of content can distort recommendation algorithms and dilute the royalty pool intended for legitimate artists.
What Constitutes "Spammy" Content?
In the context of music streaming, "spammy" or fraudulent content can include several types of tracks:
- Royalty Fraud: Tracks, often short and repetitive (like white noise or simple loops), uploaded in massive quantities to generate streams and collect small royalty payments that add up over time.
- Impersonation: Content uploaded under the name of a famous artist to mislead listeners and capture streams.
- AI Slop: Large volumes of low-effort, purely AI-generated music that clutters the platform without offering artistic value.
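To make these categories concrete, the kind of heuristic screening a platform or distributor might apply to a batch of uploads can be sketched as follows. The `Track` structure, the thresholds, and the function name are all illustrative assumptions; real detection systems rely on far richer signals (audio fingerprinting, account history, metadata analysis).

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    artist: str
    duration_seconds: int

# Hypothetical thresholds for illustration only.
MIN_DURATION_SECONDS = 60    # very short loops suggest royalty farming
MAX_UPLOADS_PER_DAY = 100    # mass submissions from one account suggest spam

def flag_suspicious_uploads(tracks: list[Track]) -> list[str]:
    """Return human-readable reasons a single account's daily batch looks spammy."""
    reasons = []
    if len(tracks) > MAX_UPLOADS_PER_DAY:
        reasons.append(f"bulk upload: {len(tracks)} tracks in one day")
    short = [t for t in tracks if t.duration_seconds < MIN_DURATION_SECONDS]
    if short:
        reasons.append(f"{len(short)} very short tracks (possible royalty farming)")
    return reasons
```

A batch of thousands of 31-second white-noise loops would trip both checks; a single three-minute song would trip neither.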
Industry-Wide Pressure and Changing Incentives
Spotify's decision reflects a broader shift in the music industry. For years, the narrative focused on the democratization of music creation, where anyone could upload a track and potentially find an audience. However, this open-door policy has been exploited.
Major record labels have become increasingly vocal about the problem. In 2023, Universal Music Group CEO Sir Lucian Grainge expressed concern over the "deluge of functional, lower-quality content" that was reducing the share of royalties for human artists. This includes tracks barely long enough to meet the minimum threshold for a royalty payout.
Adjusting Royalty Rules
In response to exploitation by short, low-quality tracks, Spotify recently updated its royalty eligibility criteria. "Functional noise recordings" (such as white noise or rain sounds) must now reach a minimum playback time of two minutes to qualify for royalties.
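The two-minute rule amounts to a simple eligibility gate. A minimal sketch follows; the 30-second standard threshold for ordinary tracks and the function name are assumptions for illustration, not Spotify's actual implementation.

```python
FUNCTIONAL_NOISE_MIN_SECONDS = 120  # the two-minute threshold described in the policy

def is_royalty_eligible(playback_seconds: int, is_functional_noise: bool,
                        standard_min_seconds: int = 30) -> bool:
    """Functional noise recordings (white noise, rain sounds) must reach
    two minutes of playback; other tracks use the standard threshold
    (assumed to be 30 seconds here)."""
    threshold = FUNCTIONAL_NOISE_MIN_SECONDS if is_functional_noise else standard_min_seconds
    return playback_seconds >= threshold
```

Under this rule, a 90-second rain-sound loop that previously earned a payout per stream would no longer qualify.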
For streaming services like Spotify and Deezer, the motivation to act is multi-faceted. Streaming fraud directly impacts their bottom line by diverting payments from legitimate creators. It also harms the user experience by cluttering music discovery features like playlists and recommendations with irrelevant content.
The Role of AI and the Music Supply Chain
Artificial intelligence acts as a powerful accelerant for these issues. Generative AI tools can create vast quantities of music instantly, but AI is also used to generate fake metadata, including artist names and rights ownership information, which complicates fraud detection.
"AI is an accelerant for those tactics, and not just through generative AI engines that can produce huge amounts of music tracks instantaneously. AI can also instantly generate the kinds of fake metadata... that are also ingredients in system-gaming and fraud."
Responsibility for curbing this problem extends across the entire music supply chain. In June 2023, the Music Fights Fraud Alliance was formed to create a united front. Its members include major streaming services like Spotify, Amazon Music, and YouTube Music, as well as digital distributors such as TuneCore and DistroKid.
Digital distributors are in a particularly challenging position. They operate in a competitive market and are incentivized to accept a high volume of tracks to maximize revenue. However, they also face pressure to be good partners to the streaming services and protect the interests of legitimate independent artists.
Spotify's Stance on AI-Generated Music
Despite the massive cleanup, Spotify has clarified that it is not implementing a complete ban on AI-generated music. The company's focus is on fraudulent and spammy uses of the technology, not on its potential as a creative tool.
Spotify is actively involved in developing industry standards for identifying AI's role in music creation. The company is contributing to work by DDEX, a music industry standards body, to create a system for labeling which parts of a track were created with AI. According to Spotify, this information is intended for display to users, providing more transparency about how a song was made.
This approach differs from that of its competitor Deezer, which has stated it has technology to detect fully AI-generated tracks and has chosen to exclude them from its discovery algorithms and withhold royalty payments for them entirely.
As the volume of music submissions continues to grow, other major platforms like Apple Music and YouTube Music are likely grappling with the same set of challenges. The actions taken by Spotify and Deezer may signal a new era of content moderation and quality control across the music streaming landscape.