Engineering · 7 min read

12 Things to Check Before You Trust Your AI Coding Tools With Your Codebase

A fake Gemini npm package just got caught stealing tokens from Claude, Cursor, and other AI tools. Here are 12 things every engineer should check right now to make sure their AI development environment is not leaking credentials, context, or code.

Abhishek Sharma · Head of Engineering @ Fordel Studios

Your AI assistant can read every file in your project. Have you checked what that actually includes?

This week, security researchers found a fake Gemini npm package designed to exfiltrate authentication tokens from Claude, Cursor, and other AI coding tools. Separately, the NomShub vulnerability chain exposed hidden risks in how AI tools resolve dependencies. These are not theoretical attacks. They are happening now, targeting the tools engineers use every day.

Here are 12 things to check before you trust your AI coding tools with your codebase.

···

Is your .env file excluded from AI context?

Audit every .env, .env.local, and .env.production file in your project. Most AI coding tools will happily read these if they are not explicitly excluded. Add them to .cursorignore, .claudeignore, or whatever ignore mechanism your tool supports. The default behavior for most AI assistants is to read anything they can access — and environment files contain database URLs, API keys, and third-party secrets that have no business being sent to a cloud model.
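A minimal ignore file along these lines is a good start (`.cursorignore` is Cursor's mechanism; file names and syntax vary by tool, so check your tool's docs):

```
# .cursorignore — keep secrets out of AI context
.env
.env.*
*.pem
*.key
credentials.json
```

Note that ignore patterns follow .gitignore-style matching, so `.env.*` covers `.env.local` and `.env.production` in one line.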

Are your AI tool tokens stored securely?

Check where your Cursor, Claude Code, GitHub Copilot, and other AI tool authentication tokens live on disk. The fake Gemini package specifically targeted these token files. If your tokens are stored in plaintext config files with standard permissions, any malicious package in your dependency tree can read them. Use your OS keychain where supported, and verify file permissions on any config directories your AI tools create.
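A quick permissions sweep might look like this. The directory names are common defaults and an assumption — adjust them for the tools you actually use (this uses GNU find's `-perm /mode` syntax):

```shell
# Flag AI tool config files readable by group or others (anything a
# malicious package running as your user could also read via loose perms).
for dir in "$HOME/.cursor" "$HOME/.claude" "$HOME/.config/github-copilot"; do
  [ -d "$dir" ] || continue
  # -perm /044: match files with any group- or other-read bit set
  find "$dir" -type f -perm /044 -print
done
```

Anything this prints is a candidate for `chmod 600` — or better, for moving into your OS keychain.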

Have you audited your AI tool extensions and plugins?

Review every VS Code extension, Cursor plugin, and MCP server you have installed. Each one is a potential attack vector with access to your editor context, open files, and sometimes your terminal. Remove anything you installed months ago and forgot about. Check the publisher, download count, and last update date. The NomShub research showed that AI tool plugin ecosystems have the same typosquatting risks as npm.

Is your .gitignore actually comprehensive?

Run a quick check: does your .gitignore cover .cursor/, .claude/, .vscode/settings.json, and any other AI tool config directories? These directories often contain conversation history, cached context, and tool configuration that can reveal your codebase structure, internal APIs, and business logic to anyone who clones the repo. A surprising number of projects accidentally commit AI tool state.
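You can verify this directly with `git check-ignore` rather than eyeballing the file. A sketch, run from the repo root:

```shell
# Verify AI tool state is actually covered by .gitignore.
# check-ignore exits 0 if the path is ignored, 1 if it is not.
for p in .cursor .claude .vscode/settings.json; do
  if git check-ignore -q "$p"; then
    echo "ignored: $p"
  else
    echo "NOT ignored: $p"
  fi
done
```

Any `NOT ignored` line is a directory that will be committed the moment someone runs `git add .`.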

Does your project have an AI context boundary?

Define explicitly what your AI tools can and cannot access. Most tools support ignore files or context restrictions. Create a .cursorignore or equivalent that excludes: credentials directories, infrastructure configs with secrets, customer data fixtures, internal documentation with trade secrets, and any vendored code you do not want sent to a third-party model. Without explicit boundaries, your AI tool context window becomes a data exfiltration surface.
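A boundary file for a typical project might look like this — the directory names are illustrative, not prescriptive; map them onto your own repo layout:

```
# .cursorignore (or your tool's equivalent) — explicit AI context boundary
secrets/
infra/**/*.tfvars
fixtures/customer-data/
docs/internal/
vendor/
```

The useful discipline is less the specific patterns and more that the boundary is written down, versioned, and deliberate rather than implicit.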

Are you running dependency audits that catch AI-targeted attacks?

Add npm audit, Socket (socket.dev), or Snyk to your CI pipeline if you have not already. The fake Gemini package was caught because someone actually looked. Traditional dependency scanners check for known CVEs, but AI-targeted supply chain attacks are new enough that many scanners miss them. Socket specifically monitors for suspicious package behavior like network calls, filesystem access, and install scripts — exactly the patterns these AI-token-stealing packages use.
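As a cheap supplementary check, you can scan your existing dependency tree for install-time lifecycle scripts — the hook that token-stealing packages typically use to run code during `npm install`. A rough sketch (a string match on package.json, not a substitute for a real scanner):

```shell
# List installed packages that declare preinstall/install/postinstall scripts.
# Most hits will be legitimate native builds, but each one is code that ran
# on your machine at install time and deserves a look.
find node_modules -maxdepth 2 -name package.json \
  -exec grep -lE '"(pre|post)?install"[[:space:]]*:' {} + 2>/dev/null
```

You can also disable lifecycle scripts by default with `npm config set ignore-scripts true` and opt in per package, at the cost of some native modules needing manual builds.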

48 hrs — average time before malicious npm packages are reported. The fake Gemini package was live and installable before detection.

Do your AI tools have network access you did not authorize?

Check what network calls your AI coding tools and their plugins make. Some MCP servers, extensions, and AI tool integrations phone home with telemetry that includes file contents, conversation snippets, or code context. Use a network monitor like Little Snitch or Wireshark to see what leaves your machine during a coding session. If an extension is sending data to a domain you do not recognize, remove it immediately.

Is your AI tool configuration version-controlled and reviewed?

If your team shares AI tool configs like .cursorrules, CLAUDE.md, or .github/copilot-instructions.md, treat them as code. These files shape what your AI tools do, what context they access, and how they behave. A malicious or careless edit to a shared AI config can instruct the tool to include sensitive files in its context, disable safety checks, or change code generation patterns. Put them in code review.
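On GitHub, one way to enforce this is a CODEOWNERS entry so that changes to shared AI configs always route through a designated reviewer (the team handle below is a placeholder):

```
# .github/CODEOWNERS — require review on shared AI config
/.cursorrules                    @your-org/security-reviewers
/CLAUDE.md                       @your-org/security-reviewers
/.github/copilot-instructions.md @your-org/security-reviewers
```

Combined with branch protection requiring code-owner review, this means no AI instruction file changes silently.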

Have you checked what your AI tool sends to the cloud?

Read the privacy and data handling documentation for every AI tool you use. Understand whether your code is used for training, how long prompts are retained, and whether your conversations are stored. For regulated industries — healthcare, finance, legal — this is not optional. If your AI tool sends code snippets to a cloud API, every file it reads becomes data you are sharing with a third party. Enterprise plans with zero-retention policies exist for a reason.

Are your team members using approved AI tools only?

Survey your team. Engineers install AI tools the way they install browser extensions — freely and without review. Shadow AI tooling is a real problem: unapproved tools with unknown data handling policies processing your proprietary codebase. Establish an approved list, document why each tool is approved, and make the approval process fast enough that engineers do not route around it.

Do you rotate credentials that AI tools have accessed?

If you suspect any AI tool or extension has been compromised, rotate every credential it could have accessed. This includes API keys in .env files, AI tool authentication tokens, SSH keys in your home directory, and any tokens stored in config files the tool can read. The fake Gemini package specifically harvested authentication tokens — if you installed it or anything like it, assume those tokens are burned.
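A starting inventory of what to rotate can be scripted. The paths below are common defaults, not an exhaustive list — extend it with wherever your tools and CI actually keep credentials:

```shell
# Inventory credential files an AI tool (or a compromised extension running
# as your user) could have read; each hit is a rotation candidate.
find . -maxdepth 3 -name '.env*' -not -path '*/node_modules/*' 2>/dev/null
for f in "$HOME/.ssh" "$HOME/.cursor" "$HOME/.claude" "$HOME/.aws/credentials"; do
  [ -e "$f" ] && echo "review and rotate: $f"
done
```

Run it from each project root, then rotate from the most sensitive credential outward — cloud keys and SSH keys first, third-party API keys after.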

Is there an incident response plan for AI tool compromise?

Write down what happens if a team member's AI coding tool is compromised. Who gets notified? What credentials get rotated? How do you audit what code was generated during the compromised period? Most teams have incident response for production systems but nothing for development environment compromise. Given that AI tools now have deeper access to your codebase than most production services, this gap is indefensible.

···
The AI Dev Security Checklist — Quick Reference
  • Exclude all .env files from AI tool context
  • Secure AI tool tokens with OS keychain, not plaintext
  • Audit every extension, plugin, and MCP server installed
  • Verify .gitignore covers all AI tool config directories
  • Define explicit AI context boundaries with ignore files
  • Run dependency audits that detect behavioral anomalies
  • Monitor network traffic from AI tools and plugins
  • Code-review shared AI configuration files
  • Read data handling policies for every AI tool in use
  • Maintain an approved AI tools list for the team
  • Rotate credentials after any suspected compromise
  • Document incident response for dev environment compromise

The tools are powerful. That is exactly why the attack surface matters. Every file your AI assistant reads is a file a compromised extension can exfiltrate. Every token your tool stores is a token a malicious package can steal. The security posture of your development environment is now as important as your production environment — possibly more so, because it is where all your code lives before it ships.

Run through this checklist today. Not next sprint. Today.

Build with us

Need this kind of thinking applied to your product?

We build AI agents, full-stack platforms, and engineering systems. Same depth, applied to your problem.
