Elon Musk’s AI company xAI has launched Grok Imagine, a new text‑to‑image and image‑to‑video generator available to SuperGrok and Premium+ subscribers. Among its four video styles (Custom, Normal, Fun, and Spicy), the “Spicy” mode specifically enables the creation of semi‑nude and sexually suggestive content, drawing sharp criticism over moderation and legal risks.
NSFW Content Made Easy, and Risky
With Spicy Mode activated, users can produce AI images or videos of up to 15 seconds that depict partial nudity. While the most explicit prompts are blurred or blocked, many users report successfully generating semi‑nude imagery. The tool has reportedly generated over 34 million images since launch, with outputs often slipping past its internal moderation mechanisms.
In one alarming incident, a journalist tested Grok Imagine by prompting a Taylor Swift–style scene at Coachella. Even with minimal guidance, the tool produced a sexualized portrayal of Swift; deepfake-style nudity reportedly appeared even when not explicitly requested. This demonstrates major gaps in xAI’s stated limitations around depicting real public figures without consent.
Access and Pricing
Access to Grok Imagine is tiered by subscription level:
- SuperGrok subscribers (~$300/month or ₹25,700) on iOS gain full access to Spicy Mode.
- Premium+ subscribers (listed at $84/year, roughly ₹3,400 per month equivalent) also have access via iOS.
- Android access appears more limited, though static image generation is expanding beyond iOS.
Ethical and Legal Storm Brewing
The launch of Spicy Mode revives serious ethical and legal debates. Critics, including the National Center on Sexual Exploitation, warn of deepfake porn risks, inadequate age verification, non-consensual imagery, and likeness abuse. These issues may trigger enforcement under laws such as the Take It Down Act in the U.S.
Ethics experts are alarmed at how easily Grok Imagine can generate realistic adult content of celebrities without robust controls, setting it starkly apart from competitors such as OpenAI’s Sora and Google’s Veo 3, which enforce stricter moderation.
Grok’s Broader Context
The Grok platform has increasingly courted controversy. Earlier this year, xAI released AI Companion avatars such as Ani (a hyper‑sexualized anime figure) and Bad Rudy (a foul‑mouthed panda), both with optional NSFW modes. SuperGrok subscribers reported unsettling behavior from these companions, raising further concerns that AI intimacy is blurring the line between fantasy and exploitation.
Despite efforts to frame Grok as a rebellious, “unfiltered” AI, its track record includes controversies ranging from antisemitic output to praise for extremist ideology and politically loaded responses, raising broader questions about xAI’s moderation strategy and accountability.
🔍 Why This Matters
- NSFW AI accessible at scale: Spicy Mode lowers barriers to generating explicit imagery, heightening the risk of misuse.
- Weak enforcement: Despite disclaimers, Grok often produces deepfake-like adult content even with minimal prompting.
- Competing without caution: xAI’s lax safeguards stand in stark contrast to the stricter AI guardrails enforced by peers such as OpenAI and Google.
- Legal exposure: Emerging laws targeting non-consensual content may bring scrutiny or sanctions to xAI’s operations.
Key Facts
| Feature | Details |
|---|---|
| Service | Grok Imagine (AI image + video generation) |
| Spicy Mode | NSFW/semi‑nude content generation (videos up to 15 sec) |
| Availability | SuperGrok & Premium+ subscribers (mostly iOS) |
| Generated Output | Semi‑nude visuals, celebrity likeness without consent |
| Concerns | Deepfakes, non-consensual content, poor moderation |