AI Identity Theft has Hit Spotify
The other day I was on Spotify when a new release popped up from Chelsea Jordan—an artist I have been following for some time. I clicked immediately. I didn’t know she was putting out new music. 

Halfway through the song, something felt off. It wasn't terrible: it sounded like her, but the phrasing was strange. Trying to figure out whether she had shifted her style or whether I was imagining it, I went to her TikTok and found out she hadn't released anything at all. Someone had used AI to imitate her voice and upload songs under her name. Not one song but several: the fake Chelsea Jordan had amassed about 30,000 monthly listeners. Jordan had pinned a video explaining the situation and warning her followers. What unsettled me most wasn't just the existence of the fake tracks; it was that the profile had stolen her entire name, already racked up listeners, and was getting paid for AI-generated deepfakes of her voice.

This is identity theft. AI didn't just copy her sound; it stole her identity. It attached itself to her name and her audience, and probably made money doing so. As a listener, I felt strangely complicit: I had given attention to something that was essentially an impersonation. There was no label warning me, no disclosure in the title. So now I'm asking a question: how much responsibility do streaming platforms have to protect both artists and consumers from this? If I can no longer assume a song is actually performed by the artist, then what kind of world are we living in?

There is a difference between experimentation and deception. If a track is AI-influenced or fully AI-generated, especially if it mimics a real artist's voice, that should be transparent: clearly flagged in the title, with an option for listeners to filter AI songs out of their recommendations. Spotify already has a community petition to mark or disable AI-generated tracks, with over 10,000 signatures; you can sign it here: https://community.spotify.com/t5/Live-Ideas/Mark-Disable-AI-Generated-Songs/idi-p/6641329

Right now, the burden is on artists to chase down impersonators, and on listeners to detect what sounds "off," because Spotify's policies are ineffective. In September 2025, Spotify published a press article on how it is protecting artists. It revealed that in the past year alone, Spotify removed over 75 million spam tracks, a surge it directly links to the explosion of generative AI tools. The platform has only recently introduced clearer rules banning unauthorized AI voice cloning and impersonation. Yet it has not disclosed how many artists have had their identities hijacked, what the average takedown time is, or how much revenue was diverted before those tracks were taken down. It has suggested clearer policy shifts, but that is not enough.

As listeners, I think we have to demand transparency and verified releases, and support creators directly. But realistically, when a song pops up on a recommended playlist, I am not going to fact-check it. One can only hope that Spotify rolls out a simple mark-or-filter button soon.



© 2021 Bacchanal Special Events. All Rights Reserved.