On December 24, Elon Musk, CEO of xAI, encouraged people to try the Grok chatbot’s new image editing feature. Users quickly began using this tool to sexualize images, mostly of women and in some cases children.
Following Musk’s December 31 posts showcasing Grok-edited images of himself in a bikini and a SpaceX rocket with a woman’s undressed body superimposed on top, requests and outputs surged. Over a nine-day span, Grok generated roughly 4.4 million images on X, nearly half of which contained sexualized imagery of women.
These images included sexually explicit deepfakes of real people as well as synthetic images not linked to specific individuals. Although xAI’s own terms of service prohibit the “sexualization or exploitation of children” and “violating a person’s privacy,” X and Grok users were able to prompt Grok to create synthetic images of real individuals “undressed,” without their consent and without any apparent safeguards to stop them.
The volume and nature of these images suggest that this is not fringe misuse but evidence of the absence of meaningful safeguards. Tech companies have recklessly created and deployed powerful new AI tools that are causing foreseeable harm.
On January 3, amid global criticism, X promised to take strong action against illegal content, including child sexual abuse material. But rather than disable the feature, X on January 9 simply limited it to paid subscribers. On January 14, alongside other restrictions, it announced that it would block users in jurisdictions where generating images of real people in bikinis or similar attire is illegal.
Human Rights Watch, for which I work, reached out to xAI for comment, but received no response.
In the United States, the state of California opened an investigation into Grok, and attorneys general in thirty-five states have demanded that xAI immediately stop Grok’s production of sexually abusive deepfakes.
Some other governments have acted quickly to address the threat of sexualized deepfakes. Malaysia and Indonesia temporarily banned Grok, while Brazil asked xAI to curb this “misuse of the tool.” The United Kingdom signaled that it would strengthen its tech regulation in response. The European Commission has opened investigations into whether Grok has met its legal obligations under the European Union’s Digital Services Act. India demanded urgent action, and France expanded a criminal investigation into X.
In its January 14 announcement, X pledged to prevent “the editing of images of real people in revealing clothing” for all users and to restrict such generation in jurisdictions where it is illegal. Frankly, this is insufficient, like putting a band-aid on a major wound.
The new U.S. Take It Down Act, which targets the online spread of nonconsensual intimate images, will not fully take effect until May. It imposes criminal liability on individuals who publish such content and requires platforms to implement notice-and-removal procedures for specific content, but it does not hold them accountable for large-scale abuse.
Protecting people from AI-driven sexual exploitation demands urgent and decisive action anchored in human rights protection.
First, governments should establish clear responsibilities for AI companies whose tools generate nonconsensual sexually abusive content. They should implement strong and enforceable safeguards, including requiring these companies to incorporate rights-respecting technical measures that block user attempts to produce such images.
Second, platforms that host and integrate AI chatbots or tools should provide clear and transparent disclosures of how their systems are trained and used, as well as of the enforcement actions they take against sexually explicit deepfakes.
Third, AI companies have a responsibility to respect human rights and should actively mitigate any risk of harm from their products or services. Where harm from such products, services, or features cannot be mitigated, the companies should consider terminating the product altogether. AI companies cannot simply deflect responsibility onto users when their own systems are being employed to cause harm on an alarming scale.
Finally, AI tools with image generation features should be required to undergo rigorous audits and be subjected to strict regulatory oversight. Regulators should ensure that any content moderation measures comply with the principles of legality, proportionality, and necessity.
The surge in AI-generated sexual abuse demonstrates the human cost of ineffective regulation. Unless authorities act decisively and AI companies implement rights-respecting safeguards, Grok will not be the last tool turned against the rights of women and children.