Posted on Jan 30, 2024
Explicit, AI-generated Taylor Swift images continue to proliferate on X, Instagram and Facebook
Responses: 2
PO1 William "Chip" Nagel Good day, Brother William. Always informative and most interesting. Thanks for sharing, have a blessed day!
I saw a video where someone brought up a really good point: the federal government hasn't seemed to take this seriously until it happened to Swift. She's not the first person to have explicit AI-generated images made of her. This has been happening for a while, and there are even AI image generators producing thousands of images of child sexual abuse: https://www.pbs.org/newshour/science/study-shows-ai-image-generators-are-being-trained-on-explicit-photos-of-children
"Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they’ve learned from two separate buckets of online images — adult pornography and benign photos of kids.
But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement."
That's from Nov 2023. This is from Dec 2023: https://www.nbcdfw.com/news/national-international/parents-and-lawmakers-are-pushing-for-protections-against-ai-generated-nude-images/3402183/
And this one is also from Dec 2023: https://www.msnbc.com/opinion/msnbc-opinion/ai-generated-nudes-new-jersey-students-rcna123931
And it's been happening to women since before these articles were written... but I guess it takes a famous person being targeted to get the government to do anything about it...
"Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they’ve learned from two separate buckets of online images — adult pornography and benign photos of kids.
But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement."
That's from Nov 2023. This is from Dec 2023: https://www.nbcdfw.com/news/national-international/parents-and-lawmakers-are-pushing-for-protections-against-ai-generated-nude-images/3402183/
This is from Dec 2023: https://www.msnbc.com/opinion/msnbc-opinion/ai-generated-nudes-new-jersey-students-rcna123931
And it's been happeining to women before it was in these articles...but I guess it takes a famous person to be targeted to get the government to do anything about it...