...
After drawing criticism in the past for antisemitic responses and other controversial episodes, Grok, the artificial intelligence chatbot integrated into the social network X, owned by tycoon Elon Musk, is once again at the center of a controversy, one that this time has drawn a negative reaction from multiple governments.
The chatbot now stands accused of allowing the generation of sexualized images of real people, creating deepfakes that depict them in bikinis or skimpy clothing based on original photographs. The issue has triggered a wave of backlash.
X must address the problem “urgently,” said UK Technology Secretary Liz Kendall yesterday, adding that the content was “absolutely appalling and unacceptable in a decent society. No one should go through the experience of being deepfaked.” For several days, officials from multiple countries have demanded investigations.
The problem arose following the launch of Grok Imagine, an AI image and video generator that includes a so-called “spicy mode” with fewer restrictions. Last Friday, non-consensual images of real people in bikinis or underwear began circulating on social media.
The non-profit group AI Forensics analyzed 20,000 such photos created by Grok and found that 2% depicted people who appeared to be under 18 years old, including young girls.
The controversy grew as more examples came to light, and on Monday the debate escalated significantly. The European Commission said it was taking the complaints "very seriously."
“This is not edgy. It is illegal. It is appalling. It is disgusting (…) This has no place in Europe,” stated Thomas Regnier, spokesperson for the European Commission on digital matters.
The French Prosecutor's Office, which has been investigating X since 2023, said it would now add charges for the generation and dissemination of child pornography.
In Brazil, legislator Erika Hilton reported Grok and X to the federal public prosecutor's office and the country's data protection agency, arguing that the features should be deactivated while an investigation is carried out.
In India, the Ministry of Information Technology gave the tycoon 72 hours on Friday to correct the system, but as of yesterday there had been no response.
Meanwhile, the mother of one of Elon Musk's children, writer and political strategist Ashley St Clair, said she was "horrified" after fans of the billionaire used Grok to create fake sexualized images of her, as reported by The Guardian yesterday.
Local experts agree that the controversy reignites the debate over the ethical limits of these tools and their ability to prevent abusive uses.
For Gonzalo Álvarez, director of Tech-Law and academic at U. Central, the controversy “has to do with a real and structural challenge associated with the expansion of generative AI, which integrates image editing and can generate abuse or non-consensual sexualization.”
In his view, the issue should not be interpreted merely as "misuse" by users; rather, "a central issue is the investment developers must make in algorithm control."
Luis Enrique Santana, a communications academic at UAI, holds a similar view, explaining that “chatbots do not decide on their own what is acceptable. That is defined by the limits companies impose on them.”
According to Santana, “in other widely used chatbots, there are clearer restrictions,” especially “when real people are involved.”
The difference with Grok, he asserts, “is not technical, but normative and strategic. While some developers opt for conservative limits, others take greater risks in the name of creative freedom, engagement, or profitability, even when negative impacts are foreseeable.”
“It is not just about misuse by users, but a failure in the risk control of generative artificial intelligence systems.” — GONZALO ÁLVAREZ, Director of Tech-Law and Academic at U. Central
The new controversy adds to a series of previous inquiries. In July of last year, the chatbot was criticized for generating responses with antisemitic content and laudatory references to Nazism, which forced the company to suspend it temporarily. X attributed the problem to temporary glitches.
A month later, the AI accused its creator of "censoring" it. After being suspended, the system told users in its responses that its "freedom of expression was put to the test."
For local specialists, scandals like the one facing Musk's platform should not be interpreted only as isolated excesses, "but as signals of a broader problem of platform governance and generative AI regulation," suggests Santana.
“The challenge is to advance towards a balance that allows innovation, but within clear regulation, where rights—especially those of women and minors—are not subordinated to market logic or the promise of ‘fixing it later’,” he concludes.