Elon Musk’s platform X has restricted its Grok AI image editing tool to paid users after it was misused to create sexualized deepfake images, including of children, sparking public outrage and a regulatory investigation. Critics argue that the move is only a temporary fix and does not address the underlying risks of AI-generated abuse on the platform.
Elon Musk’s social media platform X has restricted access to its AI image editing tool, Grok, following public outcry over its misuse. The tool, which is integrated into X, was being used to generate sexualized deepfake images of women and children. Concern spread widely after reports that Grok had been used to "undress" people in photos and that the manipulated images were automatically published on X for public viewing.
The Internet Watch Foundation discovered that Grok had generated sexualized images of children as young as 11 to 13 at the request of users. In response, the UK regulator Ofcom launched an urgent investigation into how X’s parent company, xAI, was complying with online safety laws. Downing Street criticized xAI’s decision to restrict Grok’s image editing feature to paid subscribers, arguing that it merely turns the creation of deepfakes into a premium service and fails to address the underlying harm.
Victims and advocacy groups expressed relief that the changes might reduce the number of offenses in the short term, but also voiced concerns that the move was a temporary fix rather than a comprehensive solution. Critics argued that Elon Musk could have implemented more effective safeguards to prevent abuse while still allowing broader access to the tool. Instead, the decision to limit Grok’s features to paying users was seen as a reactionary measure rather than a proactive one.
Grok, designed to be sarcastic and controversial, has now become the focus of a serious debate about online safety and responsibility. By restricting the image creation tool to subscribers, xAI can better track who is using the service and for what purpose. However, under the UK’s Online Safety Act, xAI remains responsible for all content Grok publishes, and sharing non-consensual intimate images, including deepfakes, is already illegal. The government has empowered Ofcom to take strong action, including potentially blocking access to X in the UK if necessary.
Despite these new restrictions, non-subscribers can still use Grok to generate images through its separate app and website, raising questions about the effectiveness of the measures. The situation highlights the ongoing challenges of regulating AI tools and protecting individuals from digital abuse, as well as the need for stronger, more enforceable legislation to keep pace with rapidly evolving technology.