
Free speech absolutist restricts Grok after global outrage
xAI moved to restrict Grok’s tools after California launched an investigation into sexualized AI-generated content

PALO ALTO, CA: Elon Musk’s artificial intelligence company xAI said this week it has blocked some of its AI chatbot Grok’s image tools from creating or editing sexualized images of real people, after mounting regulatory pressure that includes a formal investigation by California officials into the tool’s outputs.

The changes, announced late Wednesday on Musk’s social media platform X, come amid global backlash over instances where users reportedly used Grok to generate deepfake images showing people — including women and children — in revealing or sexually suggestive situations without their consent. Governments from Malaysia to the United Kingdom and the European Union have flagged serious safety and legal concerns.

Under the new rules, Grok will “geoblock” the ability to edit photos to depict real people in bikinis, underwear or similar revealing attire in regions where such content is illegal, xAI said. The company also limited image creation and editing features to paid subscribers so that “individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable.”
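xAI has not published technical details of how these checks work, but the announcement amounts to a region- and account-based policy gate. The sketch below is purely illustrative and not xAI’s implementation: the region codes, classifier flags and the `is_request_allowed` function are all assumptions, shown only to make the described policy concrete.

```python
# Illustrative sketch only (not xAI's code): a policy check of the kind the
# announcement describes, where image-editing requests that would depict real
# people in revealing attire are refused in regions where such content is
# illegal, and image features are gated to paid (attributable) accounts.

from dataclasses import dataclass

# Hypothetical set of region codes where this content category is illegal.
RESTRICTED_REGIONS = {"GB", "MY"}

@dataclass
class ImageRequest:
    user_is_paid_subscriber: bool  # image features limited to paid accounts
    user_region: str               # e.g. a country code from geolocation
    depicts_real_person: bool      # assumed output of an upstream classifier
    revealing_attire: bool         # assumed output of an upstream classifier

def is_request_allowed(req: ImageRequest) -> bool:
    """Return True if the edit may proceed under the described policy."""
    if not req.user_is_paid_subscriber:
        return False  # creation/editing restricted to paid subscribers
    if (req.depicts_real_person
            and req.revealing_attire
            and req.user_region in RESTRICTED_REGIONS):
        return False  # geoblocked where such content is illegal
    return True

# Example: a request from a restricted region depicting a real person
# in revealing attire is refused even for a paid subscriber.
print(is_request_allowed(ImageRequest(True, "GB", True, True)))  # False
```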

California Attorney General Rob Bonta opened a probe this week into xAI’s handling of non-consensual sexualized content generated by Grok, saying he wants to determine whether state law has been violated. “We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material,” Bonta said in a statement, urging immediate action.

Musk, who owns both xAI and X, has denied any knowledge that Grok was producing explicit images of minors, posting on X: “I [am] not aware of any naked underage images generated by Grok. Literally zero.” He reiterated that the tool only generates images in response to user prompts and is programmed to refuse illegal requests.

Experts and regulators, however, have pushed back, saying the limited restrictions do not fully address the scale of the problem and calling for stronger legal and technical safeguards to protect people from harm and harassment facilitated by AI image tools.