API key exposure through AI-generated code is a scenario that plays out at UK businesses every week.

This isn't a sophisticated attack. There was no phishing email, no malware, no social engineering. A developer used an AI tool to write code, the code had a credential problem, and the git repository did the rest. An automated scanner found it before the developer even realised anything was wrong.

The scale of the problem

GitGuardian's 2024 State of Secrets Sprawl report found over 12.8 million secrets hardcoded in public GitHub repositories in a single year — a figure that has grown every year alongside the adoption of AI coding tools.

Four ways AI-generated code exposes your systems

Hardcoded API keys are the most visible problem, but they're one of four exposure patterns that consistently appear in AI-generated code. Understanding all four matters because each requires a different fix.

Common API exposure patterns in AI-generated code:

  • Hardcoded API keys: keys in source code and git history. The most common and most immediately exploitable pattern.
  • Open endpoints: no authentication on admin or data routes. AI-generated APIs often skip authentication on internal-facing routes.
  • Cloud credentials: AWS/Azure keys committed to the repo. Infrastructure-as-code snippets frequently include credential placeholders that get filled in and committed.
  • Misconfigured CORS: any origin allowed on the API. AI tools default to permissive CORS settings because they work in every development environment.
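The first pattern and its fix fit in a few lines. A minimal Python sketch, where the `OPENAI_API_KEY` variable name is purely illustrative:

```python
import os

# The pattern AI tools generate: a live credential in source code.
# Once committed, it stays retrievable from git history even after deletion.
# api_key = "sk-proj-AbC123..."   # never do this

# The fix: read the credential from the environment at runtime,
# and fail loudly if it is missing rather than falling back to a default.
def get_api_key(name: str = "OPENAI_API_KEY") -> str:
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set; export it or load it from a secrets manager"
        )
    return key
```

The loud failure matters: silent fallbacks to a hardcoded "development" key are exactly how the insecure pattern creeps back in.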

Why AI coding tools produce insecure patterns by default

AI code generators optimise for code that works. They're trained on repositories that contain working examples, and working examples in development environments routinely hardcode credentials, skip authentication on internal routes, and use permissive CORS policies. That's how developers move fast. It's also how production systems get misconfigured.

The AI isn't doing anything wrong by its own standards. It's producing code that follows the patterns most prevalent in its training data. Production security hardening — environment variables, secrets management systems, strict CORS policies, mandatory authentication on every route — is an afterthought in most of the code it's learned from, so it's an afterthought in the code it generates.
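The afterthought can be inverted. A minimal, framework-free Python sketch of "mandatory authentication on every route", where a handler must explicitly opt out of the check rather than opt in; the route registry, token store, and handler names are all illustrative, and a real application would do this with middleware in Flask or FastAPI:

```python
ROUTES = {}
VALID_TOKENS = {"secret-token"}  # stand-in for a real token store

def route(path, public=False):
    """Register a handler. Routes are authenticated unless explicitly public."""
    def decorator(handler):
        ROUTES[path] = (handler, public)
        return handler
    return decorator

def dispatch(path, token=None):
    handler, public = ROUTES[path]
    # The default is deny: a handler only skips the check if it opted out.
    if not public and token not in VALID_TOKENS:
        return 401, "unauthorised"
    return 200, handler()

@route("/health", public=True)  # deliberately, visibly public
def health():
    return "ok"

@route("/admin/users")          # no opt-out, so auth is enforced
def admin_users():
    return ["alice", "bob"]
```

The design point is the inversion: with deny-by-default, a forgotten security decision fails closed instead of open, which is the opposite of what AI-generated route handlers tend to produce.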

"The AI produces code that works in development. The problem is that 'works in development' and 'safe in production' are two very different standards."

The deeper issue is that developers who rely heavily on AI-generated code often haven't deeply engaged with what the code actually does. They reviewed it for functionality — does it solve the problem? — not for security properties. The credential sitting in line 47 looked like a placeholder. It wasn't.

The git history problem nobody talks about

Here's what makes exposed credentials especially hard to resolve once they're in the repository. Even if you catch the problem and delete the API key from the latest version of the file, it's still in your git history. Every commit you've ever made is preserved, and anyone who clones the repository, or simply runs git log, can retrieve the credential from earlier commits.

Properly removing a credential from git history requires a rewrite of every commit that contained it, followed by a force push to every branch. It's a disruptive operation that most development teams handle badly under pressure. And the window between the original commit and the fix is exactly where the damage happens.
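The retrieval side is easy to demonstrate. A minimal Python sketch of what secrets scanners do against history output such as `git log -p --all`; the regexes are illustrative examples of common key formats, whereas real scanners like GitGuardian or truffleHog ship hundreds of patterns plus entropy checks:

```python
import re

# Illustrative credential-shaped patterns (not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API key
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token
]

def find_secrets(history_text: str) -> list[str]:
    """Scan raw `git log -p` output for credential-shaped strings."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(history_text))
    return hits

# Note the key below appears on a *removed* line: the fix commit itself
# still carries the credential in history. (AWS's documented example key.)
sample_diff = """
commit abc123
-    api_key = "AKIAIOSFODNN7EXAMPLE"
+    api_key = os.environ["AWS_ACCESS_KEY_ID"]
"""
```

Running `find_secrets` over the sample diff finds the key even though the latest version of the file no longer contains it, which is precisely the git history problem.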

GitHub offers automated credential scanning that can catch some patterns before a push completes — but it doesn't catch everything, it doesn't work on private repositories by default, and it certainly doesn't help with credentials that slipped through before the scanning was enabled.

What actually happens when credentials are compromised

The outcomes aren't hypothetical: real incidents involving AI-generated code exposure follow a consistent pattern.

In most of these cases, the developer involved had used an AI tool to write the affected code. None of them were inexperienced — they were doing what modern development practice increasingly looks like. The problem is that modern development practice hasn't caught up with the security implications of AI-generated code.


Automated scanners sweep public GitHub repositories for exposed credentials 24 hours a day. One commit with a hardcoded key is often all it takes.

The business consequence isn't just the immediate breach. It's the regulatory exposure under UK GDPR if customer data was involved, the reputational impact if the breach becomes public, and the remediation cost — credential rotation across every system that used the compromised key, followed by a full audit of what was accessed in the interim.

Detect, Assess, Defend

The consultant's approach to AI code credential exposure:

Detect
  • Git secrets scanning (GitGuardian, truffleHog, etc.)
  • DAST on API endpoints: dynamic testing for open routes
  • Cloud account anomaly detection: flag unexpected resource usage

Assess
  • How much AI-generated code has been committed? That sets the scope of the exposure risk.
  • Is secrets management in use? Vault versus hardcoded credentials.
  • Is there auth coverage on all endpoints? Every route, not just the obvious ones.

Defend
  • Secrets vaulting: no hardcoding, ever
  • Pre-commit hooks: block credential commits at source
  • API authentication enforcement: auth on every endpoint by default
  • Quarterly penetration testing: live testing of all API surfaces
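The pre-commit step in the Defend column can be as small as a script that refuses the commit when the staged diff contains a credential-shaped string. A minimal Python sketch with illustrative patterns; in practice you would wire an established scanner such as gitleaks or truffleHog into the pre-commit framework rather than maintain your own regexes:

```python
import re
import subprocess

# Illustrative credential-shaped patterns; a real hook would delegate
# to a dedicated scanner with a maintained rule set.
SECRET_RE = re.compile(
    r"AKIA[0-9A-Z]{16}|sk-[A-Za-z0-9]{20,}|ghp_[A-Za-z0-9]{36}"
)

def staged_content_is_clean(content: str) -> bool:
    """Return False if the staged text contains a credential-shaped string."""
    return SECRET_RE.search(content) is None

def main() -> int:
    # Ask git for the staged diff and refuse the commit on any match.
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout
    if not staged_content_is_clean(diff):
        print("pre-commit: possible credential in staged changes; commit blocked")
        return 1
    return 0

# Installed as .git/hooks/pre-commit, the script would end with:
#     if __name__ == "__main__":
#         sys.exit(main())
```

Blocking at commit time is cheap insurance: it catches the credential before it ever reaches history, which is the only point where removal is still a one-keystroke fix.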

The gap between how AI-assisted teams work and how they should work

The honest assessment is that most development teams using AI coding tools haven't updated their security practices to account for the new risk profile. The tools have changed the speed and volume of code production dramatically. The review and security-checking processes haven't kept pace.

That's a solvable problem. But solving it requires someone to do the scanning first — to find out what's actually in the codebase right now — before establishing the processes that prevent recurrence. The answer to "have we already committed credentials?" is almost always more interesting than teams expect.

How BBS helps with this

  • Vibe Code Security Review — We scan your codebase for hardcoded credentials, unauthenticated endpoints and misconfigured access controls — including full git history analysis to find secrets that were committed and deleted.
  • AI Security Gap Assessment — Live testing of all API surfaces plus git history scanning, producing a prioritised finding register with severity ratings and clear remediation steps.
  • Remediation Support — Code-level fix guidance and a secrets rotation plan alongside every finding, so your team knows exactly what to change and in what order.
  • Secure Dev Training — We train developers to spot and reject credential patterns in AI-generated code before they reach the commit stage — the most effective point in the process to stop this class of problem.