A new cybersecurity study published by Wiz has found that the majority of the world’s top artificial intelligence companies are inadvertently exposing confidential information online, underscoring a growing disconnect between rapid innovation and basic security discipline.
According to the report, 65% of the 50 leading AI companies analyzed had leaked verified secrets on GitHub, including API keys, authentication tokens, and other sensitive credentials. Many of these exposures were buried deep within repositories or deleted forks, areas rarely examined by standard scanning tools.
A Preventable Error
According to Glyn Morgan, the Country Manager for the UK & Ireland at Salt Security, the trend is both “glaring and avoidable.”
“When AI companies inadvertently leak their API keys, it points to a fundamental failure in governance and configuration,” he said. “It hands attackers a direct route into systems, models, and data while bypassing the usual defensive layers.”
The Wiz research underlines that these security oversights are far from isolated developer mistakes. As enterprises increasingly partner with AI startups, they inherit exposure to the same kinds of vulnerabilities. Several of the leaks, the report warns, could have exposed private models, organizational structures, and even training data, potentially compromising competitive advantage and intellectual property.
The stakes are high: affected companies collectively have a market valuation of over $400 billion.
Real-World Examples
The study enumerates several examples of exposed secrets:
• LangChain was leaking various LangSmith API keys, including those with permissions to manage organizational members.
• An enterprise-tier ElevenLabs API key was found inside a plaintext file.
• One company from the Forbes AI 50 had an exposed Hugging Face token in a deleted fork, which allowed access to about 1,000 private models. That same company also leaked Weights & Biases keys, exposing data linked to model training.
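The deleted-fork case shows how much a single credential can unlock. As a minimal illustration, the Python sketch below uses the public huggingface_hub client to show what a finder (or attacker) could learn from a leaked token within seconds; the token value is a hypothetical placeholder, not a real secret.

```python
from huggingface_hub import HfApi

# Hypothetical placeholder token; a real value would come from a scan hit.
LEAKED_TOKEN = "hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

api = HfApi(token=LEAKED_TOKEN)

# whoami() identifies the account the token belongs to,
# immediately confirming whether the credential is still live.
identity = api.whoami()
print("Token belongs to:", identity["name"])

# Listing models by that author with a valid token includes any
# private models the token is scoped to read.
for model in api.list_models(author=identity["name"]):
    print(model.id)
```

Once a token has appeared in a public commit or fork, rotation is the only reliable remedy; deleting the file or the fork does not remove it from scrapers' caches.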
Hidden Dangers Below the Surface
Traditional code scanning misses such exposures, according to Wiz, because it focuses narrowly on public repositories. To find deeper vulnerabilities, its researchers adopted a three-dimensional approach they call Depth, Perimeter, and Coverage, sketched in code after the list below.
• Depth extended the search into full commit histories, deleted forks, workflow logs, and gists, places standard scanners normally don't look.
• Perimeter widened it beyond company repositories to include employees and contributors, who may unwittingly commit secrets to personal projects.
• Coverage targeted AI-specific secret types, such as Weights & Biases, Groq, and Perplexity keys, which conventional tools typically miss.
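Wiz has not published its scanner, but the Depth and Coverage ideas are straightforward to prototype. The minimal Python sketch below greps a repository's full commit history for AI-specific key formats; the function name and all regexes are illustrative assumptions rather than verified detector rules, and the provider prefixes ("hf_" for Hugging Face, "gsk_" for Groq) reflect commonly seen key formats, not guarantees.

```python
import re
import subprocess

# Illustrative patterns only (assumptions, not Wiz's detector rules).
# The W&B key format is a bare 40-char hex string, which would collide
# with git SHAs, so the pattern requires nearby "wandb" context.
PATTERNS = {
    "huggingface": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    "groq": re.compile(r"\bgsk_[A-Za-z0-9]{40,}\b"),
    "wandb": re.compile(r"(?i)wandb[^\n]{0,40}\b[0-9a-f]{40}\b"),
}

def scan_history(repo_path: str) -> list[tuple[str, str]]:
    """Depth: scan every diff across the full commit graph, not just HEAD.

    Deleted forks live on GitHub's side and need API-level enumeration;
    this sketch only covers history reachable from local refs.
    """
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--all", "-p"],
        capture_output=True, text=True, check=True,
    ).stdout
    hits = []
    for provider, pattern in PATTERNS.items():
        for match in pattern.finditer(log):
            hits.append((provider, match.group(0)))
    return hits

if __name__ == "__main__":
    for provider, secret in scan_history("."):
        print(f"[{provider}] possible secret: {secret[:12]}...")
```

Production tools such as Gitleaks and TruffleHog work the same way at their core, scanning full git history with provider-specific rules; TruffleHog additionally verifies candidate secrets against the provider's API so that only live credentials are flagged. The Perimeter dimension, scanning contributors' personal repositories, likewise requires platform-level enumeration that generic repository scanners skip.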
The results suggest that the AI sector's pace of development has outpaced its security maturity. Nearly half of Wiz's attempts at responsible disclosure either received no response or failed to reach the correct contacts, pointing to significant gaps in vulnerability reporting and response procedures.
What Firms Should Do Now
Wiz enumerates three urgent steps for security leaders:
1. Consider all developers and contributors part of the attack surface. Implement the most stringent policies and procedures when onboarding them, including multi-factor authentication and explicit separation of personal versus professional use of GitHub.
2. Modernize internal secret-scanning practices. Go beyond simple repository checks and adopt comprehensive scans that mirror the Depth-Perimeter-Coverage approach.
3. Extend due diligence to third-party vendors. CISOs should review how AI partners manage credentials and disclosures before integrating their tools.
Speed versus Security
The report concludes that the very speed driving AI breakthroughs now poses one of the industry’s greatest security threats. As Wiz cautions, “For AI innovators, the message is clear: speed cannot come at the expense of security,” a warning that extends equally to the enterprises depending on them.