Study Finds that 4chan Is Source of Fake and Explicit Images of Taylor Swift


Images of Taylor Swift that were produced by artificial intelligence and circulated widely on social media in late January most likely originated in a recurring challenge on one of the internet’s most notorious message boards, according to a new report.

Graphika, a research firm that studies disinformation, traced the images back to one community on 4chan, a message board known for sharing hate speech, conspiracy theories and, increasingly, racist and offensive content created using A.I.

The individuals on 4chan who created the images of the singer did so as part of a game, the researchers said — a test to see whether they could create lewd (and, at times, violent) images of famous female figures.

The synthetic Swift images spread to other platforms and were seen millions of times. Fans rallied to Ms. Swift’s defense, and lawmakers demanded stronger protections against A.I.-produced images.

Graphika found a thread of messages on 4chan that encouraged people to try to get past safeguards set up by image generator tools, including OpenAI’s DALL-E, Microsoft Designer, and Bing Image Creator. Users were advised to share “tips and tricks to find new ways to bypass filters” and were told, “Good luck, be creative.”

Sharing unsavory content via games allows people to feel connected to a wider community, and they are motivated by the cachet they receive for participating, experts said. Ahead of the midterm elections in 2022, groups on platforms like Telegram, WhatsApp, and Truth Social engaged in a hunt for election fraud, winning points or honorary titles for producing supposed evidence of voter malfeasance. (True proof of ballot fraud is exceptionally rare.)

In the 4chan thread that led to the fake images of Ms. Swift, several users received compliments — “beautiful gen anon,” one wrote — and were asked to share the prompt language used to create the images. One user lamented that a prompt produced an image of a celebrity who was clad in a swimsuit rather than nude.

Rules posted by 4chan that apply sitewide do not specifically prohibit sexually explicit A.I.-generated images of real adults.

“These images originated from a community of people motivated by the ‘challenge’ of circumventing the safeguards of generative A.I. products, and new restrictions are seen as just another obstacle to ‘defeat,’” Cristina López G., a senior analyst at Graphika, said in a statement. “It’s important to understand the gamified nature of this malicious activity in order to prevent further abuse at the source.”

Ms. Swift is “far from the only victim,” Ms. López G. said. In the 4chan community that manipulated her likeness, many actresses, singers, and politicians were featured more frequently than Ms. Swift.

OpenAI said in a statement that the explicit images of Ms. Swift were not generated using its tools, noting that it filters out the most explicit content when training its DALL-E model. The company also said it uses other safety guardrails, such as denying requests that ask for a public figure by name or seek explicit content.

Microsoft said that it was “continuing to investigate these images” and added that it had “strengthened our existing safety systems to further prevent our services from being misused to help generate images like them.” The company prohibits users from using its tools to create adult or intimate content without consent and warns repeat offenders that they may be blocked.

Fake pornography generated with software has been an issue since at least 2017, affecting unwilling celebrities, government figures, Twitch streamers, students, and others. Patchy regulation leaves few victims with legal recourse; even fewer have a devoted fan base to drown out fake images with coordinated “Protect Taylor Swift” posts.

After the fake images of Ms. Swift went viral, Karine Jean-Pierre, the White House press secretary, called the situation “alarming” and said lax enforcement by social media companies of their own rules disproportionately affected women and girls. She said the Justice Department had recently funded the first national helpline for people targeted by image-based sexual abuse, which the department described as meeting a “rising need for services” related to the distribution of intimate images without consent. SAG-AFTRA, the union representing tens of thousands of actors, called the fake images of Ms. Swift and others a “theft of their privacy and right to autonomy.”

Artificially generated versions of Ms. Swift have also been used to promote scams involving Le Creuset cookware. A.I. was employed to mimic President Biden’s voice in robocalls discouraging voters from participating in the New Hampshire primary election. Tech experts say that as A.I. tools become more accessible and easier to use, audio spoofs and videos with realistic avatars could be created in mere minutes.

Researchers said the first sexually explicit A.I. image of Ms. Swift on the 4chan thread appeared on Jan. 6, 11 days before similar images were said to have appeared on Telegram and 12 days before they emerged on X. 404 Media reported on Jan. 25 that the viral Swift images had jumped to mainstream social media platforms from 4chan and a Telegram group dedicated to abusive images of women. The British news organization Daily Mail reported that week that a website known for sharing sexualized images of celebrities had posted the Swift images on Jan. 15.

For several days, X blocked searches for Taylor Swift “with an abundance of caution so we can make sure that we were cleaning up and removing all imagery,” said Joe Benarroch, the company’s head of business operations.
