Aider
Aider is an open-source AI coding assistant for the terminal that edits code in your existing git repo using your chosen model.
Reviewed by Mathijs Bronsdijk · Updated Apr 18, 2026

What is Aider?
Aider is an open-source AI pair programming tool that runs in the terminal and edits code inside your existing git repository. It was created by Paul Gauthier and released under the Apache 2.0 license, which matters because it puts Aider in a very different category from most AI coding products. You are not buying into a closed editor or a single model vendor. You install it, point it at your repo, connect the model you want, and work with it through chat. For developers who already live in the terminal, that feels less like adopting a new IDE and more like adding a capable collaborator to an existing workflow.
What makes Aider stand out in our research is not just that it can edit code; many tools can do that now. It is the way it ties every change to git, keeps track of what files are in scope, and tries to understand the wider repository through a code map instead of dumping whole projects into context. Each AI change is committed automatically with a message, so you can inspect, diff, or undo the work using normal git habits. That design attracts developers who care about traceability, privacy, and control over which model is actually touching their code.
Aider is used by individual developers, open-source contributors, and teams that want model flexibility. It supports Anthropic, OpenAI, DeepSeek, Gemini, local models through Ollama, and many more through LiteLLM. In practice, that means one team might use Claude Sonnet for day-to-day coding, another might experiment with DeepSeek to cut costs, and a third might keep everything local for compliance reasons. Aider has also built a reputation around benchmark performance, with strong SWE-bench results and a public leaderboard that compares model performance on real coding tasks. So the story here is not “AI in your editor.” It is “AI coding help, on your terms, inside your repo, with receipts.”
Key Features
- Git-first editing: Aider automatically commits AI-generated changes to git, and it can even commit pre-existing dirty changes before it starts editing. This matters because AI edits stop feeling mysterious. You can see exactly what happened, revert quickly with /undo, and keep a clean history of what the model changed versus what you changed. (A combined command-line sketch follows this list.)
- Repository map: Aider builds a structural map of your codebase instead of stuffing full files into every prompt. The default map budget is 1,000 tokens, and users can tune it with --map-tokens. In larger repos, this is one of the reasons Aider remains usable where naive context packing gets expensive and noisy.
- Broad model support: Through LiteLLM, Aider works with hundreds of models and many providers, including Anthropic, OpenAI, DeepSeek, Gemini, Groq, Mistral, Azure, and local Ollama models. That flexibility gives teams a real way to trade off cost, privacy, and quality instead of being locked into one vendor’s roadmap.
- Architect and Editor mode: Aider can split work across two models, one to reason about the solution and another to apply file edits. In Aider’s own testing, pairing OpenAI o1-preview as Architect with another model as Editor reached about 85% on its code editing benchmark. For teams hitting the limits of single-model prompting, this is one of the more interesting workflow ideas in the product.
- Testing and lint integration: Aider can run linters and test suites after making changes, then try to fix failures iteratively. It includes built-in support for many common languages and also supports custom commands through --lint-cmd and test configuration. That closes the loop between “the model wrote code” and “the code actually passes checks.”
- 100+ language support: The project says it works with over 100 programming languages, with documented support across Python, JavaScript, TypeScript, Rust, Go, C++, Ruby, PHP, HTML, CSS, and more. For polyglot teams, that means one assistant can stay useful across backend, frontend, infrastructure scripts, and tests.
- Prompt caching support: With supported providers like Anthropic and DeepSeek, Aider can cache repeated prompt content such as repo maps and read-only files. In one documented user workflow, costs dropped from roughly $0.07 to $0.10 per command down to about $0.02 to $0.04. If you use Aider heavily, that difference adds up fast.
- Multiple chat modes: Aider supports code mode, ask mode, architect mode, and help mode. This sounds small, but it changes how people use the tool. You can talk through a design in ask mode first, then switch into code mode once the plan is clear, which often leads to better edits than asking for everything in one shot.
- Terminal, browser, and IDE access: The core product is terminal-first, but there is also an experimental browser UI and a JetBrains plugin. That gives Aider a wider reach than “CLI only,” while still keeping the terminal workflow as the main experience.
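To make those flags concrete, here is a minimal command-line sketch. The flag and in-chat command names (--map-tokens, --lint-cmd, --test-cmd, --auto-test, --architect, --editor-model, /ask, /code, /undo, /help) are documented Aider options, but the file names, model choices, and lint/test commands below are illustrative placeholders, not recommendations.

```sh
# Start Aider on two files, with a larger repo map budget and
# automatic lint/test runs after each edit (placeholder files/commands):
aider src/app.py tests/test_app.py \
  --map-tokens 2048 \
  --lint-cmd "python: flake8" \
  --test-cmd "pytest -q" --auto-test

# Architect/Editor split: one model plans, another applies the edits:
aider --architect --model o1-preview --editor-model sonnet

# Inside the chat, the modes and git controls described above:
#   /ask  How should we restructure this module?
#   /code Apply the plan we just discussed.
#   /undo          # revert Aider's last commit
#   /help          # ask questions about using Aider itself
```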
Use Cases
One of the clearest real-world stories from our research came from a developer using Aider to migrate PowerShell test suites from Pester v4 to Pester v5. This was not a toy example. The work involved smart find-and-replace operations, comment cleanup, code block consolidation, and generating new test files. The developer found that model choice changed the economics dramatically. OpenAI o1-preview was accurate but expensive, around $0.70 per command with roughly 40-second response times. Switching to Claude 3.5 Sonnet with prompt caching brought the cost down to about $0.02 to $0.04 per command, which turned Aider from a costly experiment into something practical for repeated refactoring work.
Another example from the project’s own materials shows Aider being used to build and modify real applications rather than just patch snippets. Example sessions include creating a Flask app from scratch, updating existing JavaScript game code, generating tests, and working through debugging loops across multiple files. What matters here is the pattern. Aider tends to be strongest when a developer can point it at a focused slice of a repository, explain the intended change in plain language, then let the tool edit the code and verify the result through tests or linting.
We also found a practical automation story where a developer used Aider to help build lead enrichment functionality for a CRM workflow. The developer called a function that did not exist yet, then asked Aider to implement it. The total API cost was about $0.27. That is a small example, but it captures where Aider can be compelling for founders and engineers building internal tools. If the work is bounded, the repository is already in git, and you want direct control over the model and the resulting diff, Aider can turn small engineering chores into cheap, fast iterations.
Aider is also used by teams with privacy or deployment constraints. Because it can connect to local models through Ollama and other OpenAI-compatible endpoints, it fits organizations that cannot or do not want to send source code to a hosted coding assistant. In that setting, the “use case” is not one app or one feature. It is a whole development policy, where AI help is allowed only if the company controls where the code goes.
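As a sketch of that local setup, the commands below assume Ollama is installed and running on its default port; the model name and internal endpoint URL are placeholders. The OLLAMA_API_BASE and OPENAI_API_BASE environment variables are the documented way to point Aider at these endpoints.

```sh
# Fully local: point Aider at an Ollama model (model name is a placeholder).
ollama pull llama3
export OLLAMA_API_BASE=http://127.0.0.1:11434
aider --model ollama_chat/llama3

# Or any OpenAI-compatible endpoint your organization hosts:
export OPENAI_API_BASE=https://llm.internal.example.com/v1   # placeholder URL
export OPENAI_API_KEY=your-key-here
aider --model openai/my-hosted-model                         # placeholder name
```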
Strengths and Weaknesses
Strengths:
Aider’s biggest strength is its git workflow. In our research, this came up again and again as the reason experienced developers keep it around even when they also use Cursor or Copilot. Every AI edit becomes a visible commit with a message, and /undo gives a fast exit when the model goes sideways. Compared with tools that blur AI output into your editor session, Aider gives you a paper trail.
It is also unusually flexible on models. Most coding assistants ask you to accept their preferred provider, their preferred pricing, and their preferred product shape. Aider does not. Teams can use Claude, OpenAI, DeepSeek, Gemini, or local models, and switch as costs or quality change. That matters more now than it did a year ago because model performance and pricing are moving fast.
Benchmark credibility is another real strength. Aider publishes benchmark data and has posted strong results on SWE-bench and its own polyglot benchmark. That does not guarantee it will solve your repo’s weirdest problems, but it is better evidence than vague claims about productivity. The Architect and Editor setup also shows that the project is experimenting seriously with how to improve code editing quality, not just layering on UI features.
Weaknesses:
Aider is still a terminal tool first, and that is a real barrier for some teams. If your developers want inline completions, visual panes, and a polished IDE-native experience, Cursor or Copilot will feel easier on day one. Aider has a browser UI and JetBrains support, but the center of gravity is still the command line.
It can also struggle with complex, multi-step refactors if the prompt is too broad. One detailed user review in our research described prompt misinterpretation, overwritten earlier changes during sequential edits, and trouble with tricky local scope dependencies inside single files. The takeaway was not that Aider fails all the time. It was that you often get better results by breaking work into smaller instructions than by asking for a sweeping rewrite in one prompt.
There are also some rough edges in newer interfaces and edge cases. The experimental browser interface has been reported as less consistent than the terminal version, and one user noted odd behavior when working with Markdown. Compared with more commercial products, Aider can feel more like a powerful tool maintained by serious builders than a polished product wrapped for broad non-technical adoption.
Pricing
- Open source software: $0. Aider itself is free under the Apache 2.0 license. There is no required subscription to use the tool, which is a big part of its appeal for individual developers and teams that want to avoid per-seat AI pricing.
- API usage: variable. In practice, most users pay for tokens from Anthropic, OpenAI, DeepSeek, or another model provider. Our research found typical monthly spend around $30 to $60 for many active users, though this can be lower or much higher depending on model choice and usage patterns.
- Low-cost usage with caching: roughly $0.02 to $0.04 per command in one documented Claude workflow. Prompt caching can materially change the economics. One user reported costs dropping from about $0.07 to $0.10 per command to about $0.02 to $0.04 after enabling caching with Claude. (See the sketch after this list.)
- High-reasoning usage: roughly $0.70 per command in one o1-preview workflow. If you choose premium reasoning models, costs rise quickly. The same user who cut costs with Claude had previously spent about $0.70 per command using OpenAI o1-preview, with response times around 40 seconds.
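For reference, enabling caching is a single flag. The sketch below assumes an Anthropic key; --cache-prompts and --read are documented options, while the file name is a placeholder. The per-command savings quoted above came from one user’s workflow and will vary with your repo and prompts.

```sh
# Enable prompt caching with a supported provider (Anthropic shown here):
export ANTHROPIC_API_KEY=your-key-here
aider --model sonnet --cache-prompts

# Read-only context like a conventions file can be cached as well:
aider --model sonnet --cache-prompts --read CONVENTIONS.md
```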
The pricing story here is simple but worth thinking about carefully. Aider is “free” only in the sense that the wrapper is free. Your real bill comes from model usage. For solo developers who work in bursts, that can be cheaper than a fixed subscription to Cursor or Copilot. For heavy daily users on expensive models, direct API billing can exceed a flat monthly plan. The upside is transparency. You know what you are paying for, and you can switch models when the math stops working.
Alternatives
Cursor
Cursor is the most common alternative for developers comparing modern AI coding tools. It is a VS Code-based editor with inline completions, agent workflows, visual diffs, and a much more polished GUI experience. If your team wants AI built directly into the editor and does not mind subscription pricing, Cursor will feel more approachable. Aider tends to win when developers care more about git traceability, terminal workflows, and model freedom than about editor convenience.
GitHub Copilot
Copilot is still the default choice for many teams because it fits into tools they already use. It shines at inline suggestions and low-friction adoption inside VS Code, JetBrains, Vim, and Visual Studio. Compared with Aider, it is less opinionated about git history and less focused on multi-file, repo-aware chat editing. Teams that want “help while typing” often choose Copilot. Teams that want “make this change across the repo and show me the commit” are often more drawn to Aider.
Augment Code
Augment Code is aimed more squarely at enterprises with large, tangled codebases and compliance requirements. It emphasizes deep codebase understanding, very large context handling, and enterprise certifications like SOC 2 Type II. If a company is buying through procurement and needs formal compliance paperwork, Augment may be the safer choice. Aider is the better fit for teams that want open-source flexibility, lower cost, and direct control over which model is running.
Tabnine
Tabnine has long focused on privacy, security, and deployment flexibility, including self-hosted options. Organizations with strict data handling rules may compare it closely with Aider, especially if they want AI assistance without sending code to public APIs. The difference is workflow style. Tabnine feels more like an AI layer around coding environments, while Aider feels like a git-aware coding partner in the terminal.
Replit Agent
Replit Agent sits at the other end of the spectrum. It is built for a browser-based environment where the AI can take on larger chunks of planning, coding, and deployment. Founders prototyping products quickly may like that autonomy. Aider is much less autonomous and much more controlled. If you want the AI to work with you inside an existing repo, Aider fits better. If you want the AI to build a lot for you inside a hosted environment, Replit is the more natural comparison.
FAQ
What is Aider used for?
Aider is used for editing code with AI inside a git repo. Developers use it for refactoring, writing tests, debugging, generating features, and making coordinated changes across multiple files.
Is Aider open source?
Yes. Aider is open source under the Apache 2.0 license.
How do I get started?
Install Aider, open a git repository in your terminal, connect an API key for a supported model, and launch it on the files you want to work on. The official docs cover several install paths, including pip, pipx, uv, and a dedicated installer.
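A minimal first run might look like the following; the install commands and model alias are documented, while the repo path, key, and file name are placeholders.

```sh
# Install (pick one of the documented paths):
python -m pip install aider-chat
# or: pipx install aider-chat
# or: uv tool install --python python3.12 aider-chat

# Run inside an existing git repo with a provider key set:
cd /path/to/your/repo                 # placeholder path
export ANTHROPIC_API_KEY=your-key     # or OPENAI_API_KEY, etc.
aider --model sonnet src/main.py      # placeholder file
```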
How long to set up?
For a developer who already has Python and a model API key, setup is usually a matter of minutes. If your team needs local models, shared config files, or custom lint and test commands, expect a bit longer.
Does Aider work with Claude, OpenAI, and DeepSeek?
Yes. Those are some of the most commonly used model providers with Aider, and it also supports many others through LiteLLM.
Can I use Aider with local models?
Yes. Aider supports local setups through Ollama and other OpenAI-compatible endpoints. That is one reason privacy-focused teams look at it seriously.
Does Aider work in an IDE?
Yes, but the main experience is still terminal-first. There is a JetBrains plugin and an experimental browser UI, though the CLI is the most mature interface.
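If you want to try the non-terminal interface, the experimental browser UI launches with a single documented flag:

```sh
# Open Aider's experimental UI in your default web browser:
aider --browser
```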
What programming languages does Aider support?
The project says it supports over 100 languages. Common ones include Python, JavaScript, TypeScript, Go, Rust, C++, Ruby, and PHP.
How much does Aider actually cost per month?
The software itself is free, but model usage is not. In our research, a typical active-user range was about $30 to $60 per month, though careful model choice and caching can push that down.
Is Aider better than Cursor?
It depends on what you value. Aider is stronger on git integration, model flexibility, and terminal workflows. Cursor is stronger on UI polish, inline editing, and editor-native convenience.
Can Aider run tests and linters after edits?
Yes. It can run built-in or custom lint commands and test commands after changes, then try to fix issues based on the results.
What are the biggest downsides?
The biggest tradeoffs are the terminal-first workflow and the need to prompt carefully on complex tasks. It is powerful, but it usually works best when you give it focused instructions instead of asking for a huge rewrite all at once.