Every team I talk to has the same complaint: their AI coding assistant writes code that looks nothing like their codebase. Different naming conventions, wrong patterns, imports from libraries they don't use. The fix is embarrassingly simple.
What is a project context file and why does it matter?
A project context file is a markdown document that sits in your repository root and gets automatically loaded by your AI coding assistant every time it starts a session. CLAUDE.md for Claude Code, .cursorrules for Cursor, .github/copilot-instructions.md for GitHub Copilot. Same concept, different filenames.
Think of it as the onboarding document you'd write for a new hire, except this new hire processes it in 200 milliseconds and follows it literally. The assistant reads it before every interaction, so everything you put in there shapes every response you get.
Without a context file, your assistant is working from its training data alone. It'll write perfectly valid code that happens to use Express when you're on Fastify, camelCase when you use snake_case, and class components when your entire codebase is hooks. With a good context file, those problems disappear on day one.
What should go in a project context file?
Here is the thing most guides get wrong: they tell you to dump your entire architecture document into the context file. Do not do this. Context files have a token budget, and every token you waste on obvious things is a token not spent on the things your assistant actually gets wrong.
The golden rule: only include what the assistant would get wrong without being told. If it would naturally write TypeScript in a TypeScript project, you don't need to say "use TypeScript." If it would never guess that you use a custom error handling pattern, that goes in.
- Tech stack specifics (framework versions, key libraries)
- Naming conventions that differ from defaults
- File/folder structure patterns
- Testing approach and preferred libraries
- Git workflow and commit message format
- Error handling patterns unique to your project
- Import ordering and module resolution rules
- Security constraints and compliance requirements
- What NOT to do (banned patterns, deprecated APIs)
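Those categories map naturally onto a short, headed file. A bare skeleton (the section names are a suggestion, not a required format):

```markdown
# ProjectName

## Stack
<framework + versions, key libraries>

## Conventions
<naming, exports, imports that differ from defaults>

## Structure
<where files live>

## Testing
<runner, file placement, commands>

## Git
<branch naming, commit message format>

## Do NOT
<banned patterns, deprecated APIs>
```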
How do you structure a CLAUDE.md for maximum impact?
After iterating on context files across a dozen projects, here is the structure we landed on at Fordel. It is ordered by what matters most — the assistant reads top-to-bottom, and if context gets truncated, you want the critical stuff processed first.
Section 1: Project identity (3-5 lines)
Start with what this project is and its core tech stack. Keep it brutally short. The assistant needs to know whether it is working on a Next.js 15 app or a Go microservice, not your product vision statement.
A good opener looks like: "Fordel Studios website. Next.js 15 App Router, TypeScript strict mode, Sanity CMS, Tailwind CSS v4, deployed on Vercel. No Pages Router, no getServerSideProps, no API routes for data fetching — use Server Components and server actions."
That is about 40 words, and it already prevents the five most common mistakes an assistant would make in a Next.js project.
Section 2: Conventions that break defaults
This is where the real value lives. List every convention where your project differs from what the assistant would assume. Be specific and give examples.
Bad: "Follow our coding standards." The assistant has no idea what your standards are. Good: "Component files use PascalCase.tsx. Utility functions use kebab-case.ts. All React components export as named exports, never default exports. Server Components go in app/, client components go in src/components/ with a 'use client' directive."
Bad: "Write clean code." Good: "No barrel exports (index.ts re-exports). Import directly from the source file. Prefer composition over inheritance. Maximum file length: 300 lines — if longer, split."
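Inside the file itself, those rules land best as terse bullets rather than paragraphs. A sketch using the examples above (the specific rules are illustrations, not recommendations):

```markdown
## Conventions
- Component files: PascalCase.tsx; utility files: kebab-case.ts
- React components use named exports, never default exports
- Server Components in app/; client components in src/components/ with 'use client'
- No barrel exports (index.ts re-exports); import directly from the source file
- Prefer composition over inheritance
- Max file length: 300 lines; split anything longer
```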
Section 3: The "Do NOT" list
This might be the most important section. AI assistants are trained on millions of repositories and will reach for popular patterns even when they are wrong for your project. Explicitly ban what you do not want.
In our projects, this section includes things like: "Do NOT add console.log for debugging — use the structured logger from @/lib/logger. Do NOT use any CSS-in-JS. Do NOT create new API routes when a server action would work. Do NOT install new dependencies without asking first."
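One banned pattern per line keeps the section scannable and easy to extend. In file form, the rules above read roughly like this:

```markdown
## Do NOT
- Do NOT add console.log for debugging; use the structured logger from @/lib/logger
- Do NOT use any CSS-in-JS
- Do NOT create new API routes when a server action would work
- Do NOT install new dependencies without asking first
```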
Section 4: File structure map
You do not need to list every file. Give the assistant a mental model of where things live. Think of it as the tree output filtered to only the directories that matter.
We use a simple indented list:

- src/app/ — route segments (App Router)
- src/components/atoms/ — small reusable components
- src/components/organisms/ — complex composed components
- src/lib/ — shared utilities and clients
- src/sanity/ — CMS schemas and queries
- public/ — static assets
The assistant uses this to know where to put new files and where to look for existing ones. Without it, you will get new utility files created in random locations.
Section 5: Testing and quality gates
If you have specific testing conventions, state them here. "Unit tests use Vitest, not Jest. Test files sit next to source files as *.test.ts. Integration tests go in __tests__/integration/. Run tests with pnpm test. E2E tests use Playwright — config is in playwright.config.ts."
Include your CI requirements too: "All PRs must pass TypeScript strict checks, ESLint, and Prettier before merge. Run pnpm lint && pnpm typecheck before committing."
How do you handle multi-file context without blowing the token budget?
Claude Code supports nested CLAUDE.md files. You can put one in the repo root for project-wide rules and additional ones in subdirectories for domain-specific context. The root file loads at the start of every session; subdirectory files are pulled in when the assistant works with files in those directories, with the more local rules taking priority.
Here is how we structure it for larger projects:
- CLAUDE.md (root) — Stack, global conventions, git workflow
- src/components/CLAUDE.md — Component patterns, design system tokens, accessibility rules
- src/lib/CLAUDE.md — Utility conventions, error handling, logging patterns
- src/sanity/CLAUDE.md — Schema conventions, GROQ query patterns, migration rules
- scripts/CLAUDE.md — Script conventions, environment variable handling
This keeps each file small and focused. The root file stays under 200 lines, and subdirectory files stay under 50. The assistant gets the full picture without any single file getting unwieldy.
For Cursor, the classic setup is a single .cursorrules file (newer versions also support scoped rule files under .cursor/rules/), so you have to be more selective. Focus on the conventions that cause the most frequent mistakes and link to documentation for the rest.
What are the most common mistakes teams make with context files?
Mistake 1: Writing an essay
The most common failure mode. Teams dump their entire architecture decision record, coding standards document, and onboarding guide into the context file. The assistant processes all of it, but the signal-to-noise ratio drops. Be ruthless about what earns a place in the file. If you would not repeat it to a senior engineer joining your team on their first day, it probably does not belong.
Mistake 2: Being vague
"Follow best practices" means nothing. "Use the repository pattern" is better but still ambiguous. "Data access goes through src/lib/repositories/*.ts using the BaseRepository class. See src/lib/repositories/user.repository.ts for the pattern" — that is actionable. The assistant can read that file and replicate the pattern exactly.
Mistake 3: Never updating it
Your context file is a living document. When you find the assistant repeatedly making the same mistake, add a rule. When your conventions change, update the file. We review ours during sprint retros — takes two minutes, saves hours.
Mistake 4: Duplicating what the code already says
If your ESLint config enforces import ordering, you do not need to specify import ordering in the context file. If your tsconfig.json has strict mode enabled, the assistant will pick that up from the config. Focus on the things that are not machine-enforced.
Mistake 5: Forgetting the negative space
Teams write what they want but forget to write what they do not want. AI assistants are biased toward popular patterns. If you are not using Redux and your codebase has zero Redux code, the assistant will not randomly add it. But if you are using Zustand instead of React Context (which the assistant might default to), you need to say that explicitly.
How do you measure whether your context file is working?
We track a simple metric: how many times per week do we have to correct the assistant on something that should have been in the context file? When we first introduced CLAUDE.md across our projects, that number was 15-20 corrections per week. After three weeks of iterating, it dropped to 2-3.
The workflow looks like this: every time you correct the assistant, ask yourself whether a context file rule would have prevented it. If yes, add the rule immediately. If no, it was a genuine edge case that does not need a rule.
You can also test your context file by starting a fresh session and asking the assistant to scaffold a new feature. If it uses the right patterns, naming conventions, and file locations without being told, your context file is doing its job.
> “Every correction you give your AI assistant that you do not encode into a context file is a correction you will give again tomorrow.”
What does a complete, production-ready CLAUDE.md look like?
Here is a simplified version of what we use on a real Next.js project. This is not the full file — ours has project-specific details — but the structure and density are representative.
The file starts with a one-line project description and stack. Then the conventions section: TypeScript strict, named exports only, no default exports except for Next.js pages. Server Components by default, 'use client' only when state or browser APIs are needed. Tailwind for styling, no CSS modules, no styled-components. Error boundaries at route segment level, not per-component.
Then the structure section: where components live, where utilities go, where tests sit. Then the testing section: Vitest for unit tests, Playwright for E2E, test IDs follow data-testid convention.
Then the banned patterns: no any types (use unknown and narrow), no barrel exports, no relative imports deeper than two levels (use path aliases), no synchronous file system operations in API routes.
Then the git section: branch naming, commit message format, PR template requirements. The whole file is 120 lines. Readable in under a minute.
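Pulling those sections together, a condensed sketch of such a file. The project name, paths, and branch scheme here are illustrative stand-ins, not our actual configuration:

```markdown
# Acme Storefront
Next.js 15 App Router, TypeScript strict mode, Tailwind CSS v4, deployed on Vercel.

## Conventions
- Named exports only; default exports only for Next.js pages
- Server Components by default; 'use client' only when state or browser APIs are needed
- Tailwind for styling: no CSS Modules, no styled-components
- Error boundaries at route-segment level, not per component

## Structure
- src/app/: route segments
- src/components/: UI components
- src/lib/: shared utilities and clients

## Testing
- Vitest for unit tests, Playwright for E2E
- Test IDs follow the data-testid convention

## Do NOT
- No `any` types: use `unknown` and narrow
- No barrel exports
- No relative imports deeper than two levels: use path aliases
- No synchronous file system operations in API routes

## Git
- Branch naming: feat/<slug>, fix/<slug>
- Conventional Commits format; PR template required
```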
What do you actually have now?
If you followed this guide, you have a project context file that transforms your AI coding assistant from a generic code generator into something that understands your specific project. Your assistant now knows your stack, your conventions, your file structure, and most importantly, what not to do.
- A CLAUDE.md (or .cursorrules) in your repo root with 5 focused sections
- Subdirectory context files for domain-specific conventions (if using Claude Code)
- A "Do NOT" section that prevents the assistant's most common wrong guesses
- A workflow for updating the context file when you catch repeated mistakes
- A way to test the file by scaffolding a feature in a fresh session
The investment is about 30 minutes to write the initial file and 2 minutes per week to maintain it. The return is every interaction with your AI assistant producing code that looks like your team wrote it, not like it was copy-pasted from a random tutorial.
At Fordel, we treat the CLAUDE.md as a first-class project artifact. It gets code reviewed. It gets updated in the same PR as convention changes. It is version controlled. And it is the single change that had the biggest impact on our AI-assisted development velocity in 2026.