
AGENTS.md

AGENTS.md is a vendor-neutral standard that tells AI coding agents how to work in a repository by documenting its commands, conventions, and rules.

Reviewed by Mathijs Bronsdijk · Updated Apr 18, 2026

  • Used by over 60,000 open-source projects
  • Stewarded by the Agentic AI Foundation
  • Reduces agent-generated bugs by 35-55%
  • 60% time savings during onboarding with AGENTS.md
  • LLM-generated files reduce performance by 3%
  • Human-written files improve success rates by 4%
  • Supports major AI tools like OpenAI Codex and Google Jules
  • Encodes domain-specific knowledge for AI agents
Screenshot of AGENTS.md website

What is AGENTS.md?

AGENTS.md is an open markdown standard for telling AI coding agents how to work inside a repository. If README.md is written for humans, AGENTS.md is written for tools like OpenAI Codex, Google Jules, Cursor, GitHub Copilot agent mode, Aider, Devin, and others that need concrete instructions about commands, conventions, and project-specific rules before they start changing code.

We researched AGENTS.md as both a file format and a movement. It was created as a vendor-neutral convention instead of a proprietary config tied to one coding assistant. The format is intentionally simple, just a markdown file placed in the repo, usually at the root, with no rigid schema. That simplicity is part of the appeal. Teams can write plain-language guidance about how to build, test, and safely modify code, and multiple agent tools can read the same file. Stewardship has moved under the Agentic AI Foundation within the Linux Foundation, which gives it more credibility as shared infrastructure rather than a single company's preference.
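To make the format concrete, here is a short illustrative AGENTS.md for a hypothetical Python service. The commands, paths, and script names are invented for this example; the standard itself prescribes none of them.

```markdown
# AGENTS.md

## Setup
- Install dependencies: `pip install -r requirements.txt`

## Build and test
- Run the full test suite: `pytest tests/`
- Lint before committing: `ruff check src/`

## Rules
- Never edit files under `src/generated/` by hand.
- Use `scripts/migrate.sh`, not the framework's default migration command.
```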

The reason AGENTS.md caught on is practical. AI agents often need information that does not belong in a README, such as exact test commands, repo-specific workflows, nested project boundaries, or business rules that are obvious to maintainers but invisible in code. Adoption has grown quickly, with the project site citing more than 60,000 open-source repositories using it. But the story is not just hype. Recent academic work from ETH Zurich found that AGENTS.md helps only in certain cases, and poorly written files can increase cost and even reduce success rates. So AGENTS.md is best understood as useful project memory, not magic.

Key Features

  • Universal markdown format: AGENTS.md uses plain markdown with no required schema. That matters because teams can write one file that works across many tools instead of maintaining a custom format for each editor or coding agent.

  • Broad tool support: The standard is recognized by major agent tools including OpenAI Codex, Google Jules, Cursor, GitHub Copilot agent workflows, Aider, Devin, Factory, and UiPath tools. In practice, this cross-tool support is the main reason teams adopt it, because the same repository guidance can travel across 15 or more tools.

  • Hierarchical instructions: Many implementations support nested AGENTS.md files, where a file closer to the working directory overrides a higher-level one. OpenAI has shown this pattern inside large monorepos, even using dozens of AGENTS.md files, which helps teams give local instructions without turning the root file into a wall of text. A layout sketch follows this list.

  • Command-first guidance: The most effective AGENTS.md files put exact build and test commands near the top. GitHub analysis of more than 2,500 repositories found that command sections had the highest return, because agents perform better when they can run explicit commands instead of guessing from package files or docs.

  • Project-specific constraints: AGENTS.md is good at encoding information the agent cannot infer, such as "use this custom script, not the default framework command" or "never touch this generated directory." Research suggests this is where the file earns its keep, because generic architectural summaries often add tokens without adding much value.

  • Monorepo support: Large repositories can place AGENTS.md files in subdirectories so each service or package has its own instructions. This matters for organizations where frontend, backend, data, and infra teams all have different rules, test flows, and ownership boundaries.

  • Open governance: The standard is now stewarded through the Linux Foundation's Agentic AI Foundation. For buyers and engineering leaders, that reduces the risk that the format disappears or shifts suddenly to serve one vendor's product strategy.

  • Low setup cost: Getting started usually means creating one markdown file in the repo root. Teams can have a basic version in minutes, then expand it over time as they notice where agents get confused or waste effort.
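To illustrate the hierarchical pattern described above, here is a sketch of how nested files might sit in a hypothetical monorepo; the directory names are invented for the example:

```text
repo/
├── AGENTS.md              # global rules: commit style, CI expectations
├── services/
│   ├── api/
│   │   └── AGENTS.md      # overrides the root: how to test this service
│   └── web/
│       └── AGENTS.md      # frontend-specific conventions
└── packages/
    └── shared/            # no local file, so the root AGENTS.md applies
```

Under the nearest-file-wins convention, an agent editing code in services/api/ reads that directory's AGENTS.md first and falls back to the root file for anything it does not cover.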

Use Cases

One common use case is open-source repositories trying to reduce friction for AI-assisted contributions. The AGENTS.md project reports adoption in more than 60,000 repositories, and GitHub's analysis of 2,500 repositories using agent instruction files found that detailed, well-structured guidance correlated with 35% to 55% fewer agent-generated bugs. That is not a promise for every team, but it shows why maintainers are willing to add one more file to the repo. They are not doing it for theory, they are trying to stop agents from running the wrong commands, violating style expectations, or touching the wrong parts of the codebase.

Another use case is monorepo coordination. OpenAI has publicly described a repository setup with 88 AGENTS.md files. That tells you something important about where this standard fits. It is not just for tiny side projects. It becomes useful when one repository contains multiple services, tools, and ownership zones, and a root README cannot realistically teach an agent how each area works. In those setups, a root AGENTS.md can explain global rules, while local files tell the agent how to test a specific package or what conventions apply in a subfolder.

We also found a growing pattern in data-heavy organizations. Some teams generate parts of AGENTS.md from governed metadata catalogs so agents can see current information about data sources, contracts, ownership, and lineage. The interesting part here is not the markdown file itself, but what it becomes, a bridge between software instructions and operational knowledge. In those environments, AGENTS.md is being used less like static documentation and more like a maintained interface to project reality.
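As a sketch of that pattern, a catalog-driven pipeline might regenerate a section like the following on each sync. The section name, source names, and ownership handles here are hypothetical, not part of any standard:

```markdown
## Data sources (generated from the catalog; do not edit by hand)
- `orders_v2` (Postgres): owned by @data-platform, contract v3, contains PII
- `events_raw` (Kafka): owned by @streaming-team, schema in schemas/events.avsc
```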

There is also a quieter use case, onboarding AI tools. Research cited around AGENTS.md claims about 60% time savings when organizations introduce new AI tools into projects that already have maintained guidance files. The logic is simple. If a team has already written down exact commands, conventions, and constraints once, each new agent starts from the same baseline instead of learning the repo from scratch.

Strengths and Weaknesses

Strengths:

  • AGENTS.md solves a real documentation gap. README files are usually optimized for humans, not autonomous coding tools. Teams use AGENTS.md to move machine-oriented instructions out of human docs, which keeps the README cleaner and gives agents a predictable place to look.

  • The cross-tool story is stronger than most alternatives. A CLAUDE.md file helps inside Anthropic's tooling, and .cursorrules helps in Cursor, but AGENTS.md is the one format that multiple vendors have chosen to support. For teams testing several coding agents, that portability matters more than any single vendor feature.

  • It works especially well for exact commands and non-obvious rules. GitHub's repository analysis found command sections gave the highest payoff. In plain terms, agents do better when the file says "run this exact test command" than when it gives a broad essay about architecture.

  • It scales surprisingly well in monorepos. The nearest-file-wins pattern means teams can keep a short root file and put local instructions where they matter. That is a better fit than trying to maintain one giant instruction document for every package in a complex repo.

  • It is cheap to adopt. There is no platform migration, SDK, or contract to sign. A team can add one markdown file today, and if they hate it, they delete it tomorrow.

Weaknesses:

  • The real performance gains are modest. ETH Zurich researchers found that human-written context files improved success rates by only about 4% on average. That is useful, but much smaller than the marketing around agent guidance sometimes suggests.

  • Bad files can make things worse. The same ETH Zurich work found LLM-generated context files reduced performance by about 3% and increased inference costs by roughly 23%. So a rushed, auto-generated AGENTS.md can become expensive clutter rather than help.

  • More instructions often mean more steps. Even when human-written files helped, they also increased the number of agent steps and token costs. The researchers observed agents following instructions too literally, such as running broad test suites for simple fixes.

  • Tool fragmentation has not disappeared. Even though AGENTS.md is the closest thing to a common standard, many teams still maintain CLAUDE.md, .cursorrules, Copilot instruction files, and AGENTS.md together. That creates duplicate documentation and drift.

  • It introduces a security review surface. Some tools automatically include AGENTS.md in prompts, which means changes to the file can quietly shape agent behavior. Security researchers have warned that in shared repos, AGENTS.md deserves code-review scrutiny because it can act like a hidden instruction layer. One mitigation is sketched after this list.
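One lightweight mitigation, assuming a GitHub-hosted repository, is to route every AGENTS.md change to a designated reviewer via CODEOWNERS; the team handle below is hypothetical:

```text
# .github/CODEOWNERS
# Gitignore-style pattern: matches AGENTS.md in any directory
AGENTS.md    @org/security-review
```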

Pricing

  • AGENTS.md standard: $0
  • Open-source project site and format: Free
  • Implementation cost: Internal team time

AGENTS.md itself is free. There is no paid plan because it is a standard and a convention, not a hosted SaaS product. What users actually spend is time, first to write the file, then to keep it accurate as build commands, test flows, and repo structure change.

The hidden cost is token usage in the tools that consume it. This came up clearly in the ETH Zurich research. Human-written context files improved outcomes a bit, but they also increased inference costs. LLM-generated files were worse, increasing cost while lowering success. So the pricing conversation is less about subscription fees and more about whether your AGENTS.md is short and useful or long and expensive.

Compared with proprietary alternatives, AGENTS.md is cheaper to maintain if you use multiple coding agents, because one shared file can cover most of the repo guidance. But if your team still keeps separate CLAUDE.md, .cursorrules, and Copilot files for tool-specific behavior, the maintenance bill rises fast.

Alternatives

CLAUDE.md

Anthropic's CLAUDE.md serves a similar purpose but is centered on Claude Code workflows. Teams that are all-in on Anthropic may prefer it because it maps closely to that product's behavior and conventions. The tradeoff is portability. If your engineers also use Cursor, Codex, or Copilot, CLAUDE.md becomes one more file to maintain instead of a common source of truth.

.cursorrules

Cursor popularized project-level rule files for editor-native AI coding. It is a strong option for teams that live inside Cursor all day and want settings tuned to that environment. We would choose AGENTS.md over .cursorrules when repo portability matters, but some teams keep both because Cursor-specific behavior still benefits from local customization.

.github/copilot-instructions.md

GitHub Copilot has its own instruction path, especially for agent workflows. This is the practical choice for organizations standardized on GitHub and wanting tight integration with pull requests, Actions, and repository workflows. The downside is familiar, it does not travel cleanly outside GitHub's ecosystem.

JULES.md and other vendor-specific files

Google Jules and other tools have their own naming conventions. These can be useful if you want to take advantage of product-specific features or prompts. But the broader pattern in the market is clear, every vendor-specific file increases maintenance burden, while AGENTS.md is trying to be the shared layer underneath.

No agent instruction file at all

This is still a valid alternative, especially for small repos with standard tooling and obvious conventions. The ETH Zurich findings suggest that if your project does not contain much non-inferable knowledge, a weak AGENTS.md may not justify its token cost. In those cases, clean code, a good README, and discoverable scripts may already be enough.

FAQ

What is AGENTS.md in simple terms?

It is a markdown file that tells AI coding agents how to work in your repository. Think of it as repo instructions for machines rather than for human contributors.

Who uses AGENTS.md?

It is used by open-source maintainers, internal engineering teams, and organizations experimenting with multiple AI coding tools. The project site says more than 60,000 repositories use it.

Which tools support AGENTS.md?

Support includes OpenAI Codex, Google Jules, Cursor, GitHub Copilot agent workflows, Aider, Devin, Factory, and others. The exact behavior varies by tool, but cross-vendor support is one of the format's main strengths.

How do I get started?

Create an AGENTS.md file in the root of your repository. Start with the exact commands for setup, build, and test, then add project-specific rules the agent would not easily infer from the code.
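If it helps, a bare skeleton to fill in might look like this; the section names are a common convention, not a requirement of the standard:

```markdown
# AGENTS.md

## Setup
<exact install command>

## Test
<exact command for one package, and for the full suite>

## Do not
<paths or actions the agent must avoid>
```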

How long does setup take?

A basic file can take 10 to 30 minutes if you already know the key commands and constraints. A good file takes longer because you need to trim it down to the instructions that actually help.

What should I put in AGENTS.md?

The best candidates are exact commands, local workflows, forbidden paths, security notes, and domain rules that are not obvious from the repository. Avoid stuffing it with generic architecture summaries unless they change agent behavior.

Does AGENTS.md actually improve agent performance?

Sometimes. ETH Zurich found that human-written context files improved success rates by about 4% on average, but they also increased step count and cost.

Can AGENTS.md hurt performance?

Yes. The same research found that LLM-generated context files reduced success rates by about 3% and increased inference costs by roughly 23%. A noisy file can be worse than no file.

Is AGENTS.md better than CLAUDE.md or .cursorrules?

It depends on your workflow. AGENTS.md is better if you want one portable standard across tools. Tool-specific files can still be better when you need behavior tailored to a single product.

Can I use AGENTS.md in a monorepo?

Yes. That is one of its strongest use cases. Many tools support nested files so each subproject can define local instructions.

Is AGENTS.md secure?

It is only as safe as your review process. Because some tools automatically include it in prompts, changes to AGENTS.md should be reviewed carefully, especially in shared repositories.

Should I auto-generate AGENTS.md with AI?

We would be cautious. Research suggests fully LLM-generated files often add cost and reduce effectiveness. Human-written guidance focused on non-obvious project knowledge performs better.
