AI-generated sexually explicit images of Taylor Swift have been widely shared on X (previously known as Twitter), highlighting the growing issue of AI-generated fake pornography and the difficulties in preventing its distribution.
One of the most viewed examples on X garnered over 45 million views, 24,000 reposts, and numerous likes and bookmarks before the verified user who posted the images had their account suspended for breaching platform rules. The post remained on the platform for approximately 17 hours before it was taken down.
However, as the post gained attention, the images began to circulate more widely and were shared by other accounts. Many of these posts are still visible, and a surge of new explicit fakes has emerged. In certain regions, the phrase “Taylor Swift AI” trended, further spreading the images.
According to a 404 Media report, the images may have first appeared in a Telegram group where users share AI-generated explicit images of women, often created with Microsoft Designer. Users in the group reportedly joked about how the images of Swift went viral on X.
X’s guidelines clearly prohibit the hosting of synthetic, manipulated media and non-consensual explicit content. However, neither X, Taylor Swift’s representatives, nor the NFL have commented on our inquiries about this issue.
Swift’s fan base has criticized X for allowing many of the posts to remain live for as long as they have. To counter this, fans have taken to using the same hashtags that were initially used to spread the explicit images. Instead of sharing the inappropriate content, they are posting genuine performance clips of Swift, effectively drowning out the explicit fakes.
This incident underscores the significant challenge of curbing the spread of deepfake pornography and AI-generated images of real individuals. While some AI image generators include safeguards that block the creation of explicit or photorealistic celebrity images, many lack such restrictions entirely. The onus of preventing the spread of these fake images often falls on social media platforms, a task that can be daunting even under optimal conditions, and particularly challenging for a company like X, which has significantly reduced its moderation capabilities.
Currently, X is under investigation by the European Union over allegations that it is being used to spread illegal content and misinformation. The company is also reportedly being scrutinized for its crisis management protocols following the discovery of misinformation related to the Israel-Hamas conflict being promoted on the platform.