Mercor says it was hit by cyberattack tied to compromise of open-source LiteLLM project
By Jakub Antkiewicz
April 1, 2026
AI recruiting startup Mercor has confirmed it was affected by a recent supply chain attack that compromised the popular open-source project LiteLLM. The company acknowledged the incident after the extortion group Lapsus$ claimed to have breached Mercor’s systems and posted samples of allegedly stolen data. The incident puts a spotlight on the security risks inherent in the rapidly expanding AI sector, where reliance on shared open-source components can create widespread vulnerabilities.
The attack originated from malicious code inserted into a package associated with LiteLLM, a project used by what Mercor described as “thousands of companies.” Mercor, which works with major players like OpenAI and Anthropic to train AI models, was valued at $10 billion after a $350 million Series C round in October 2025. In a statement, spokesperson Heidi Hagberg confirmed the company “moved promptly to contain and remediate” the incident and is conducting an investigation with third-party forensics experts. Lapsus$ has since shared samples referencing Slack data and video recordings of interactions on Mercor’s platform, though it remains unclear how the LiteLLM compromise relates to the data Lapsus$ claims to hold.
The Mercor incident is a stark example of how security failures can cascade through the AI ecosystem's software supply chain. With libraries like LiteLLM downloaded millions of times per day, a single compromise can expose a vast number of organizations, from startups to established enterprises. The attack has already prompted LiteLLM to overhaul its compliance processes, but the full extent of the damage across the industry is still under investigation. It also underscores a growing tension between rapid development built on open-source tools and the imperative to secure the resulting AI applications against sophisticated threats.
The compromise of a single, popular open-source library demonstrates that the interconnected nature of the AI development stack has created a systemic risk, turning a vulnerability in one project into a potential crisis for thousands of dependent companies.