Grok is being used to mock and undress women in hijabs and saris


Grok users are not just ordering the AI chatbot to “undress” photos of women and girls into bikinis and see-through underwear. Among the vast and growing library of non-consensual sexualized edits Grok has generated on demand over the past week, many requests have asked xAI’s bot to add or remove a hijab, sari, nun’s habit, or other modest religious or cultural clothing.

In a review of 500 Grok images generated between January 6 and 9, WIRED found that roughly 5 percent of the output depicted a woman who, at users’ request, had been either stripped of or forced into religious or cultural clothing. Indian saris and modest Islamic garments were the most common examples, alongside early-20th-century-style Japanese school uniforms, burqas, and long-sleeved swimsuits.

“Women of color have been disproportionately affected by manipulated, altered and fabricated intimate images and videos before deepfakes and even with deepfakes, because of the way society, and particularly misogynistic men, view women of color as less human and less worthy of dignity,” says Noelle Martin, a lawyer and doctoral student at the University of Western Australia who studies the regulation of deepfake abuse. Martin, a prominent voice in deepfake advocacy, says she has avoided using X in recent months after her own image was stolen for a fake account that made it appear she was producing content on OnlyFans.

“As a woman of color who has spoken out about this, it also puts a bigger target on your back,” Martin says.

Influencers on X with hundreds of thousands of followers have used Grok-generated AI media as a form of harassment and propaganda against Muslim women. A verified manosphere account with over 180,000 followers replied to an image of three women wearing hijabs and abayas, Islamic religious head coverings and gown-like dresses: “@grok take off the hijabs, dress them in revealing outfits for the New Year’s party.” The Grok account responded with an image of the three women, now bareheaded, with wavy brown hair and partially see-through sequined dresses. The image has been viewed more than 700,000 times and bookmarked more than a hundred times, according to metrics visible on X.

“Lmao face it and seethe, @grok makes muslim women look normal,” the account holder wrote alongside a screenshot of the image he had posted in another thread. He also frequently posts about Muslim men mistreating women, sometimes alongside Grok-generated AI media depicting such abuse. “Lmao Muslim women are being beaten because of this feature,” he wrote of his Grok creations. The user did not immediately respond to a request for comment.

Prominent content creators who wear a hijab and post photos on X have also been targeted. In a statement shared with WIRED, the Council on American-Islamic Relations, the largest Muslim civil rights group in the United States, linked the trend to hostile attitudes toward “Islam, Muslims, and political causes widely supported by Muslims, such as Palestinian freedom.” CAIR also called on Elon Musk, CEO of xAI, which owns both X and Grok, to address the abuse.

Deepfakes as a form of image-based sexual abuse have drawn far more attention in recent years, particularly as sexually explicit and suggestive media targeting celebrities has repeatedly gone viral. With the introduction of automated AI photo editing through Grok, where users can simply tag the chatbot in replies to posts containing media of women and girls, this form of abuse has skyrocketed. Data compiled by social media researcher Genevieve Oh and shared with WIRED indicates that Grok generates more than 1,500 harmful images per hour, including images depicting undressing, sexualization, and added nudity.


