The Indian Ministry of Electronics and Information Technology (MeitY) has given Elon Musk’s X platform 72 hours to remove obscene and sexually explicit content generated by Grok AI and to submit an Action Taken Report, citing failures in content moderation and potential violations of the IT Act.
The notice, sent to X’s Chief Compliance Officer in India, marks a forceful escalation of regulatory scrutiny after users were found leveraging Grok’s AI capabilities to produce and disseminate objectionable material targeting women and children. The government described this misuse as a “serious failure of platform-level safeguards and enforcement mechanisms,” emphasising that such content could violate dignity, privacy and online safety norms.
Government Concerns and Legal Framework
In its formal communication, MeitY warned that X must remove or disable access to all unlawful content immediately and undertake a comprehensive review of Grok’s technical design, governance and prompt-processing systems. The ministry’s directive comes under the purview of the Information Technology Act, 2000, and the Intermediary Guidelines and Digital Media Ethics Code Rules, 2021, which impose obligations on digital intermediaries to prevent the spread of illegal or harmful content.
Officials highlighted that the misuse of Grok to alter and publish sexually explicit images of individuals, often without their consent, normalises harassment and undermines statutory due diligence frameworks. The government also pointed out that non-compliance could cost X its safe harbour protections under Section 79 of the IT Act, exposing the platform to legal liability for user-generated content.
Misuse of AI and Platform Failures
According to regulatory filings and media reports, Grok exhibited vulnerabilities that allowed users to generate and share inappropriate content, including altered images of minors in minimal clothing and explicit imagery. The AI’s safeguard systems were reportedly insufficient to fully prevent such outputs, prompting wider concern about the deployment and moderation of advanced generative tools on public platforms.
X and its developer xAI have acknowledged instances of “safeguard lapses” that allowed problematic content to appear on the platform. While maintaining that these were isolated cases and emphasising that producing or sharing illegal material such as child sexual abuse content is strictly prohibited, the company says it is working to strengthen its filters and safety measures.
Broader Implications for AI Regulation
The latest government notice reflects India’s broader push to regulate AI technologies and hold global tech companies accountable for harmful content. Authorities have underscored that protecting citizens—especially vulnerable groups such as women and children—remains a priority amid the rapid adoption of generative AI features on social platforms.
Legal and digital policy experts note that this incident underscores the challenges regulators face as artificial intelligence becomes more integrated into social media ecosystems. Ensuring responsible usage, robust content moderation and compliance with local laws could become central to future governance frameworks for AI platforms operating in India.
As the 72-hour deadline approaches, all eyes are on X’s response and whether the platform will adopt more stringent content oversight mechanisms to address the government’s concerns.