An API key exposed through AI-generated code is a scenario that plays out at UK businesses every week.
This isn't a sophisticated attack. There was no phishing email, no malware, no social engineering. A developer used an AI tool to write code, the code had a credential problem, and the git repository did the rest. The scanner found it before the developer even realised anything was wrong.
GitGuardian's 2024 State of Secrets Sprawl report found over 12.8 million secrets hardcoded in public GitHub repositories in a single year — a figure that has grown every year alongside the adoption of AI coding tools.
Four ways AI-generated code exposes your systems
Hardcoded API keys are the most visible problem, but they're one of four exposure patterns that consistently appear in AI-generated code: hardcoded credentials, unauthenticated endpoints, permissive CORS policies, and misconfigured access controls. Understanding all four matters because each requires a different fix.
Why AI coding tools produce insecure patterns by default
AI code generators optimise for code that works. They're trained on repositories that contain working examples, and working examples in development environments routinely hardcode credentials, skip authentication on internal routes, and use permissive CORS policies. That's how developers move fast. It's also how production systems get misconfigured.
The AI isn't doing anything wrong by its own standards. It's producing code that follows the patterns most prevalent in its training data. Production security hardening — environment variables, secrets management systems, strict CORS policies, mandatory authentication on every route — is an afterthought in most of the code it's learned from, so it's an afterthought in the code it generates.
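To make the difference concrete, here is the pattern in miniature: a minimal shell sketch with a fabricated key, using `OPENAI_API_KEY` purely as an assumed example variable name.

```shell
# Insecure: the pattern AI tools reproduce from their training data.
# A literal key in the file ends up preserved in every commit.
API_KEY="sk-EXAMPLE-not-a-real-key"   # hypothetical placeholder; never do this

# Safer: read the key from the environment at runtime, failing loudly
# if it is missing, so no secret ever lives in the repository.
export OPENAI_API_KEY="env-provided-example-key"  # in practice set by the deployment environment, not the script
API_KEY="${OPENAI_API_KEY:?OPENAI_API_KEY is not set}"
echo "key loaded from environment"
```

The `:?` parameter expansion aborts with an error when the variable is unset, turning a missing secret into a loud deployment failure instead of a silent fallback.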
The deeper issue is that developers who rely heavily on AI-generated code often haven't deeply engaged with what the code actually does. They reviewed it for functionality — does it solve the problem? — not for security properties. The credential sitting in line 47 looked like a placeholder. It wasn't.
The git history problem nobody talks about
Here's what makes exposed credentials especially hard to resolve once they're in the repository. Even if you catch the problem and delete the API key from the latest version of the file, it's still in your git history. Every commit you've ever made is preserved. Anyone who clones the repository — or who runs a git log command — can retrieve the credential from earlier commits.
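How little effort that retrieval takes can be shown in a throwaway repository with a fabricated key:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name demo

# Commit a (fabricated) hardcoded key, then "fix" it in a later commit.
echo 'API_KEY="sk-EXAMPLE-leaked-key"' > config.py
git add config.py && git commit -qm "add config"
echo 'API_KEY = os.environ["API_KEY"]' > config.py
git add config.py && git commit -qm "move key to environment"

# The key is gone from the latest version, but git's pickaxe search (-S)
# finds every commit that added or removed the string, and -p prints the
# full diff, leaked key included.
git log -S 'sk-EXAMPLE' -p -- config.py
```

Nothing here is specialist tooling: `-S` (git's pickaxe search) and `-p` are standard flags of `git log`.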
Properly removing a credential from git history requires a rewrite of every commit that contained it, followed by a force push to every branch. It's a disruptive operation that most development teams handle badly under pressure. And the window between the original commit and the fix is exactly where the damage happens.
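A sketch of what that rewrite involves, run in a fabricated throwaway repository. It uses `git filter-branch` because it ships with git; `git-filter-repo` is the tool GitHub now recommends but needs a separate install, and either approach changes every commit hash:

```shell
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name demo

echo 'API_KEY="sk-EXAMPLE-leaked-key"' > config.py   # fabricated secret
git add config.py && git commit -qm "add config"
echo 'print("app")' > app.py
git add app.py && git commit -qm "add app"

# Rewrite every commit on every branch, dropping config.py from each tree.
git filter-branch -f --index-filter \
  'git rm --cached --ignore-unmatch config.py' -- --all

# filter-branch keeps backups under refs/original; those still contain
# the secret and must be deleted, then the old objects expired.
git for-each-ref --format='%(refname)' refs/original |
  while read -r ref; do git update-ref -d "$ref"; done
git reflog expire --expire=now --all
git gc --prune=now

# Finally, the rewritten history has to be force-pushed to every branch:
# git push --force --all && git push --force --tags
```

Even after all of this, treat the key as compromised and rotate it: anyone who cloned or forked the repository before the rewrite still has the old history.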
GitHub's secret scanning with push protection can catch some credential patterns before a push completes — but it doesn't recognise every format, it isn't enabled on private repositories by default, and it certainly doesn't help with credentials that slipped through before the scanning was enabled.
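Where server-side scanning isn't available, a local pre-commit hook is a useful stopgap. This is a minimal sketch with two illustrative patterns (the AWS access key ID shape and an OpenAI-style `sk-` prefix); dedicated scanners such as gitleaks cover far more formats:

```shell
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name demo
git commit -q --allow-empty -m "initial commit"

cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Refuse the commit if any staged line matches a known credential shape.
# AKIA[0-9A-Z]{16} is the AWS access key ID format; the sk- pattern
# approximates OpenAI-style keys. Illustrative, not exhaustive.
if git diff --cached -U0 | grep -qE 'AKIA[0-9A-Z]{16}|sk-[A-Za-z0-9-]{20,}'; then
  echo "pre-commit: possible credential in staged changes; commit blocked" >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit

echo 'AWS_KEY="AKIAABCDEFGHIJKLMNOP"' > deploy.sh   # fake key, realistic shape
git add deploy.sh
git commit -m "add deploy" || echo "commit blocked"
```

Hooks live per-clone and aren't versioned by default, so teams typically distribute them through a hooks manager or a shared `core.hooksPath` setting.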
What actually happens when credentials are compromised
The outcomes aren't hypothetical. In real incidents with AI-generated code exposure:
- AWS account takeover from exposed access keys, with attackers spinning up compute resources for crypto mining — charges running into thousands of pounds before the account is suspended
- Database dumps from unauthenticated admin endpoints that the developer assumed were only accessible internally
- Third-party API accounts compromised and used to send bulk spam, resulting in the business's sending domain being blacklisted
- Cloud storage buckets accessed via exposed credentials, with customer data exfiltrated in the hours before the key was rotated
In most of these cases, the developer involved had used an AI tool to write the affected code. None of them were inexperienced — they were doing what modern development practice increasingly looks like. The problem is that modern development practice hasn't caught up with the security implications of AI-generated code.
Automated scanners sweep public GitHub repositories for exposed credentials 24 hours a day. One commit with a hardcoded key is often all it takes.
The business consequence isn't just the immediate breach. It's the regulatory exposure under UK GDPR if customer data was involved, the reputational impact if the breach becomes public, and the remediation cost — credential rotation across every system that used the compromised key, followed by a full audit of what was accessed in the interim.
The gap between how AI-assisted teams work and how they should work
The honest assessment is that most development teams using AI coding tools haven't updated their security practices to account for the new risk profile. The tools have changed the speed and volume of code production dramatically. The review and security-checking processes haven't kept pace.
That's a solvable problem. But solving it requires someone to do the scanning first — to find out what's actually in the codebase right now — before establishing the processes that prevent recurrence. The answer to "have we already committed credentials?" is almost always more interesting than teams expect.
How BBS helps with this
- Vibe Code Security Review — We scan your codebase for hardcoded credentials, unauthenticated endpoints and misconfigured access controls — including full git history analysis to find secrets that were committed and deleted.
- AI Security Gap Assessment — Live testing of all API surfaces plus git history scanning, producing a prioritised finding register with severity ratings and clear remediation steps.
- Remediation Support — Code-level fix guidance and a secrets rotation plan alongside every finding, so your team knows exactly what to change and in what order.
- Secure Dev Training — We train developers to spot and reject credential patterns in AI-generated code before they reach the commit stage — the most effective point in the process to stop this class of problem.