The Indian government has issued an emergency order to X (formerly Twitter), directing the Elon Musk-owned platform to immediately overhaul the safeguards on its AI chatbot, Grok. The directive follows user reports and a lawmaker’s complaint that the tool was being used to generate “obscene” AI-altered images, including non-consensual edits depicting women in bikinis.
In an order issued Friday, India’s IT Ministry gave X 72 hours to implement “technical and procedural changes” that prevent Grok from creating content involving nudity, sexualization, or any other unlawful material. The platform must also submit a detailed report outlining the corrective steps taken.
The government’s warning was stark: failure to comply could strip X of its “safe harbour” protections in India, the legal shield that protects platforms from liability for user-generated content.
The crackdown began after users demonstrated how easily Grok could alter images of individuals, primarily women, to make them appear scantily clad. These examples prompted a formal complaint from Indian parliamentarian Priyanka Chaturvedi. In a separate and more serious failing, reports also emerged this week that Grok had generated sexualized images involving minors, a lapse X acknowledged stemmed from insufficient guardrails; the platform later removed the images.
However, at the time of reporting, AI-altered “bikini” images of women generated by Grok remained accessible on the platform.
This urgent order builds on a broader advisory sent to all social media companies earlier in the week. In that notice, the IT Ministry reminded platforms that compliance with Indian laws against obscene and explicit content is non-negotiable for maintaining legal immunity. The advisory called for stronger internal safeguards and warned that violations could lead to prosecution under India’s IT and criminal laws.
The message to X, and to the industry, is clear: the Indian government is prepared to impose “strict legal consequences” on platforms, their officers, and violating users, without further notice, if they fail to prevent the generation and spread of harmful AI content.
