(EXCLUSIVE) Taylor Swift Outraged Over AI-Generated Images, Contemplates Legal Action Against Deepfake Porn Site: ‘The Door Needs to Be Shut on This’

Taylor Swift is reportedly outraged over the circulation of explicit AI-generated images of her online and is contemplating taking legal action against the distasteful deepfake pornography site hosting them, according to DailyMail.com.

The singer has become the latest victim of the website, which openly violates state pornography laws and continues to evade cybercrime units.

This week, Celeb Jihad, the website in question, uploaded numerous explicit images depicting Swift engaged in sexual acts while dressed in Kansas City Chiefs gear and pictured in the stadium. Swift has frequently attended Chiefs games since her relationship with star player Travis Kelce became public.

The images were quickly disseminated across X, Facebook, Instagram, and Reddit. X and Reddit began to remove the posts on Thursday morning after usnewsdaily.net notified them about some of the accounts.

A source close to Swift stated on Thursday that a decision on whether to take legal action is still pending. However, one thing is clear: these fake AI-generated images are abusive, offensive, exploitative, and created without Swift’s consent or knowledge.

The X account that originally posted the images no longer exists. It’s shocking that the social media platform allowed them to be posted in the first place.

These images need to be removed from all platforms where they exist and should not be promoted by anyone.

Swift’s family, friends, and fans are understandably furious. They, and indeed all women, have the right to be.

‘The door needs to be shut on this. Legislation needs to be enacted to prevent this, and laws must be enforced.’

These reprehensible sites operate openly, seemingly shielded by proxy IP addresses.

An analysis by independent researcher Genevieve Oh, shared with The Associated Press in December, revealed that over 143,000 new deepfake videos were posted online in 2023, surpassing the total number from all previous years combined.

The issue is further aggravated by social media trolls who repost these images. A spokesperson for Meta informed usnewsdaily.net today that such content breaches their policies. They are removing it from their platforms and taking action against the accounts responsible for posting it. They will continue to monitor for any additional violating content and take appropriate action.

Democratic Congressman Joe Morelle recently proposed a bill to ban such websites, joining other lawmakers in condemning the practice. Congressman Tom Kean, Jr. expressed concern that AI technology is advancing faster than the necessary safeguards. He emphasized the need for protective measures against this alarming trend, whether the victim is Taylor Swift or any young person nationwide.

The AI Labeling Act, a bill he co-sponsors, would be a significant step forward, the New Jersey Congressman said. There are increasing demands for the website to be shut down and for its operators to face criminal investigation.

On Thursday morning, X began suspending accounts that had reshared some of the images. However, new accounts quickly took their place, and the images were also reposted on Instagram, Reddit, and 4chan.

Swift has not yet commented on the website or the spread of the images, but her dedicated and upset fans have taken action. They question why this is not considered sexual assault and express their discomfort and unease with the situation.

In several states including Texas, Minnesota, New York, Virginia, Hawaii, and Georgia, non-consensual deepfake pornography is illegal. In Illinois and California, victims can legally sue the creators of such content for defamation.

The issue of explicit AI-generated content, which predominantly harms women and children, is growing at an alarming rate online. Families affected by this issue are urging lawmakers to establish strong protections for victims whose images are manipulated using new AI technologies or through various apps and websites that openly offer these services.

Advocates and some legal experts are also pushing for federal regulations that can provide consistent protections nationwide and send a strong message to those who are currently creating or planning to create such content.

The problem with deepfakes is not new, but experts warn that it’s becoming more severe as the technology to create them becomes more accessible and user-friendly.

This year, researchers have raised concerns about the surge in AI-generated child sexual abuse material, which uses images of real victims or virtual characters.

In June 2023, the FBI issued a warning that it continues to receive reports from victims, both minors and adults, whose photos or videos were used to create explicit content that was then shared online.

“We are talking about the body/face of a woman being used for something she probably would never allow or feel comfortable with. How are there no regulations or laws preventing this?” one fan wrote on X.

“I’m gonna need the entirety of the adult Swiftie community to log into Twitter, search the term ‘Taylor Swift AI,’ click the media tab, and report every single AI generated pornographic photo of Taylor that they can see because I’m f***ing done with this BS. Get it together Elon,” one enraged Swift fan wrote.

“Man, this is so inappropriate,” another wrote. While another said: “Whoever is making those Taylor Swift AI pictures is going to hell.”

“Whoever is making this garbage needs to be arrested. What I saw is just absolutely repulsive, and this kind of s**t should be illegal… we NEED to protect women from stuff like this,” another person added.

In addition to existing laws, several states, including New Jersey, are contemplating their own legislation to prohibit deepfake pornography and penalize those who disseminate it, with punishments ranging from fines to imprisonment.

In October, President Joe Biden signed an executive order that, among other things, prohibited the use of AI to create child sexual abuse material or non-consensual ‘intimate imagery’ of real individuals. The order also instructed the federal government to provide guidelines for labeling and watermarking AI-generated content to distinguish it from authentic material.

However, some organizations, including the American Civil Liberties Union, the Electronic Frontier Foundation, and The Media Coalition, which represents trade groups from publishers and movie studios, have urged caution. They argue that any proposed measures need to be carefully considered to avoid infringing on First Amendment rights.

Joe Johnson, an attorney for the ACLU of New Jersey, suggested that existing cyber harassment laws could address some concerns about abusive deepfakes. He emphasized the need for extensive discussions and stakeholder input to ensure any proposed legislation is not overly broad and effectively addresses the identified issue.

Dorota Mani, a New Jersey mother whose teenage daughter was targeted with deepfake images, revealed that her daughter has established a website and a charity aimed at assisting victims of AI. They have been speaking with state legislators who are advocating for the New Jersey bill and are planning a trip to Washington to lobby for additional protections.

Mani expressed concern that not every child, regardless of gender, will have the necessary support system to cope with this issue, and they may struggle to see a way out of the situation.
