This article was written with the help of artificial intelligence.
The ideas, concerns, and experiences are mine — the AI simply helped organize them into readable English instead of my usual developer brain dump.
So yes, AI helped write this… but hopefully it won't help leak my API keys.
A story that is becoming more common
Recently I came across a story that is starting to appear more frequently in developer communities.
A developer was experimenting with AI coding agents to speed up development.
He configured the agent so it could analyze the entire repository and help generate code with full context.
Everything worked great for weeks.
Until one day he checked the billing dashboard of one of the APIs his project used.
The number was shocking.
Thousands of dollars in usage.
Usage that he had never generated himself.
After investigating, he discovered the issue.
An API key had been exposed.
The key was stored in a configuration file inside the repository.
The AI agent had read that file while building context about the project.
Later, through a prompt injection in another conversation or repository, someone managed to get the model to reveal part of that information.
From there, the key started circulating.
Someone began using it.
And the bill arrived.
Stories like this are becoming more common:
- unexpected OpenAI usage spikes
- cloud APIs abused by bots
- leaked map service keys
- exposed backend credentials
The problem isn't AI itself.
The problem is how we configure it.
The hidden risk: AI agents can read your repository
Modern coding agents such as:
- Claude Code
- Codex CLI
- Gemini CLI
can analyze your repository to understand how your project works.
This allows them to:
- generate code
- refactor files
- understand architecture
- provide contextual suggestions
But there is an important implication.
They can read files in your repository.
And in many iOS projects, some of those files contain sensitive information.
For example:
.xcconfig
.env
GoogleService-Info.plist
Secrets.swift
fastlane/.env
These files often contain things like:
- API keys
- service tokens
- private endpoints
- analytics credentials
If an agent reads those files, that information may become part of the context sent to the AI model.
And that's where the real risk begins.
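As a hypothetical illustration (the file name comes from the list above, the key is fake), a typical Secrets.swift is nothing more than string literals, readable by any tool with file access:

```swift
// Hypothetical Secrets.swift — an anti-pattern, shown only for illustration.
// The key below is fake; in a real file like this, the literal is visible
// to any tool (or AI agent) that can read the file.
enum Secrets {
    static let serviceAPIKey = "sk-EXAMPLE-0000000000"
}
```

Once a file like this is part of the agent's context, the key travels with every request that context is attached to.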
Prompt Injection: a new attack surface
A prompt injection occurs when malicious instructions are introduced into the context of an AI agent.
If the AI agent has access to sensitive files, it might follow these instructions and expose information that should never leave the project.
Because of this, many teams are starting to adopt a new principle:
AI agents should have restricted access to your repository.
Especially when it comes to files containing secrets.
Sensitive files commonly found in iOS projects
In a typical iOS project, secrets are often stored in files like these:
.xcconfig
.env
.env.*
GoogleService-Info.plist
Secrets.swift
*.p12
*.mobileprovision
fastlane/.env
fastlane/Appfile
fastlane/Matchfile
These files and directories should not be accessible to AI agents.
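As a rough sketch of how such a list could be enforced in tooling (for example, a pre-commit check), the patterns above can be expressed as a simple path filter. This is a hypothetical helper, not an exhaustive scanner:

```swift
// Hypothetical helper: flags paths whose names commonly hold secrets,
// mirroring the list above. A sketch under those assumptions, not a
// complete secret scanner.
let sensitiveNames: Set<String> = [
    "Secrets.swift", "GoogleService-Info.plist", "Appfile", "Matchfile"
]
let sensitiveSuffixes = [".xcconfig", ".p12", ".mobileprovision"]

func looksSensitive(_ path: String) -> Bool {
    // Take the last path component without pulling in Foundation.
    let name = path.split(separator: "/").last.map(String.init) ?? path
    return sensitiveNames.contains(name)
        || name.hasPrefix(".env")
        || sensitiveSuffixes.contains { name.hasSuffix($0) }
}

print(looksSensitive("Config/Debug.xcconfig"))     // true
print(looksSensitive("Sources/AppDelegate.swift")) // false
```

A check like this only catches known names; it complements, but does not replace, the agent-level deny rules described below.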
How to block sensitive files in Claude Code
Claude Code allows defining access rules through a configuration file.
Create the following file:
.claude/settings.json
{
"permissions": {
"deny": [
"Read(**/*.xcconfig)",
"Read(**/.env*)",
"Read(**/Secrets.swift)",
"Read(**/GoogleService-Info.plist)",
"Read(**/fastlane/*.env)",
"Read(**/*.p12)",
"Read(**/*.mobileprovision)",
"Read(**/Certificates/**)",
"Read(**/Keys/**)"
]
}
}
This prevents Claude Code from reading those files entirely.
Even if a prompt injection attempts to access them, the agent will not have permission.
You can read more about Claude Code permission settings in the official documentation:
https://code.claude.com/docs/en/settings
How to block sensitive files in Codex CLI
At the time this article was written, Codex CLI does not provide a standard,
built-in mechanism to define ignore rules for files that should never be read by the agent.
There have been proposals in the Codex repository to support things like a
.codexignore file or configurable ignore patterns, but these features are
not consistently available yet.
Because of this, developers should rely on a combination of repository
structure and .gitignore to reduce exposure.
For example:
*.xcconfig
.env
.env.*
GoogleService-Info.plist
Secrets.swift
*.p12
*.mobileprovision
fastlane/.env
Certificates/
Keys/
This does not guarantee that an agent will never access these files, but it
helps ensure they are not tracked in the repository or easily included in automated scans.
The safest approach is still to avoid storing secrets in the repository at all
and to load them through environment variables or secret managers.
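One way to follow that advice is to read secrets from the process environment at runtime. The sketch below assumes the key is injected by CI or an Xcode scheme; `MY_SERVICE_API_KEY` is an example name, not something from a real project:

```swift
import Foundation

// Sketch: load a secret injected via the environment (e.g. by CI or an
// Xcode scheme) instead of committing it to the repository.
// "MY_SERVICE_API_KEY" is a hypothetical variable name.
func loadSecret(_ name: String) -> String? {
    ProcessInfo.processInfo.environment[name]
}

if let key = loadSecret("MY_SERVICE_API_KEY") {
    // Use the key; never log its value.
    print("Loaded key of length \(key.count)")
} else {
    print("MY_SERVICE_API_KEY not set — configure it outside the repository")
}
```

With this approach there is nothing for an agent to read in the repository in the first place, which is stronger than any ignore rule.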
How to block sensitive files in Gemini CLI
Gemini CLI also analyzes repository files to build context.
You can exclude files using rules similar to .gitignore.
Create: .geminiignore
Example of configuration:
*.xcconfig
.env*
Secrets.swift
GoogleService-Info.plist
*.p12
*.mobileprovision
fastlane/.env
Certificates/
Keys/
For more details about configuring Gemini tools and repository access, you can check the official Gemini documentation:
https://geminicli.com/docs/cli/gemini-ignore/
A simple rule to remember
When using AI agents in your development workflow, assume the following:
If the agent can read a file, that information might end up in the model's context.
Because of that, the safest rule is simple:
If a file contains secrets, an AI agent should not be able to read it.
Final thoughts
AI coding agents are incredibly powerful tools and are quickly becoming part of modern development workflows.
But they also introduce a new category of security risks that many teams are still learning about.
In iOS projects especially, configuration files often contain sensitive information.
Restricting access to those files is a simple step that can prevent:
- credential leaks
- API abuse
- unexpected billing surprises
- prompt injection attacks
AI can make us more productive.
But it should never make our secrets easier to steal.