A growing coalition of advocacy groups is urging Apple and Google to remove X and its AI tool Grok from their app stores, warning that the technology is being misused to generate nonconsensual sexual deepfakes and other illegal content in violation of platform policies.
Despite mounting evidence that such activity violates app marketplace rules, both X and its integrated AI assistant remain available on Apple's App Store and the Google Play Store.
A coalition of 28 civil society groups, including prominent women’s organizations and technology accountability advocates, issued formal appeals this week urging Apple CEO Tim Cook and Google CEO Sundar Pichai to take immediate action. The letters argue that the continued distribution of Grok-enabled services undermines existing safeguards designed to prevent AI-generated sexual images, nonconsensual intimate images (NCII), and child sexual abuse material (CSAM).
According to the organizations, Grok has been repeatedly exploited to produce digitally altered images that depict women and minors undressed without their consent, a practice the groups describe as widespread digital sexual exploitation. They contend that this activity directly breaches Apple's App Review Guidelines and Google Play's developer policies, both of which prohibit content that promotes harm, sexual abuse, or illegal material.
Among the signatories are UltraViolet, the National Organization for Women, Women's March, MoveOn, and Friends of the Earth. These groups emphasize that warnings about Grok's capacity for deepfake abuse were raised well before its public rollout, yet meaningful enforcement has failed to materialize. They argue that platform accountability must extend beyond policy statements to decisive action when AI systems are weaponized against vulnerable populations.
The letters sent to Apple and Google highlight the broader implications for AI safety and tech regulation, noting that unchecked AI sexual exploitation erodes trust in digital platforms and places women and children at disproportionate risk. Advocacy leaders stress that app store operators play a critical gatekeeping role and cannot distance themselves from harms enabled by applications they approve and distribute.
As regulators worldwide continue to examine content moderation failures and the responsibilities of technology companies, this controversy adds pressure on Apple and Google to demonstrate that their marketplaces are not safe havens for tools linked to illegal or abusive practices. Civil society groups maintain that removing access to X and Grok would send a clear signal that violations involving nonconsensual sexual deepfakes will not be tolerated.