
LangGraph Platform

LangGraph Platform helps teams build, deploy, and run stateful AI agents reliably in production with flexible model support.

Reviewed by Mathijs Bronsdijk · Updated Apr 13, 2026

Tool · Open Source + Paid

Open Source · Self-Hosted · API Available · Free Tier · SDK: Python, JavaScript · Cloud, Self-hosted, On-prem

  • Used by LinkedIn, Uber, and Klarna
  • Supports durable execution with checkpointing
  • Human-in-the-loop for critical decisions
  • Token-by-token streaming for real-time UX
  • Flexible deployment options including BYOC
  • 57.3% of surveyed teams had agents in production in 2025
  • Optimized for complex, stateful workflows
  • Integrates with OpenAI, Anthropic, and more

What is LangGraph Platform?

LangGraph Platform is a deployment and orchestration stack for building stateful AI agents that need to survive real production conditions. It was created by LangChain Inc., the company behind the wider LangChain ecosystem, but LangGraph is intentionally usable on its own. Teams often pair it directly with OpenAI, Anthropic, Gemini, or open-source models without depending on LangChain abstractions. The core idea is simple: agents do not behave like normal web apps. They run for minutes, hours, or even days, keep track of evolving state, pause for human review, call tools, recover from failures, and need detailed traces of what happened.

That need for agent-specific infrastructure is what pushed LangGraph into existence. LangChain had seen companies like LinkedIn, Uber, and Klarna run into the limits of rigid agent frameworks, especially when workflows got messy, branched, or required custom control. So LangGraph took the opposite route: it gives developers lower-level primitives (graphs, nodes, edges, state, checkpoints, and reducers) instead of prescribing one "correct" agent architecture. Under the hood, its design borrows from distributed systems ideas used in Google Pregel and Apache Beam, adapted for long-running agent workloads.

There is also an important naming wrinkle. What many teams knew as LangGraph Platform or LangGraph Cloud has been rebranded on the deployment side as LangSmith Deployment as of October 2025. In practice, people still often say “LangGraph” when they mean the whole stack: the open-source orchestration framework, the runtime server, the studio, and the managed deployment layer. Our research found that this is best understood as a tool for teams who want to build custom agents with production reliability, not a quick-start toy and not a no-code product.

Key Features

  • Graph-based orchestration: LangGraph models agent execution as nodes and edges with explicit shared state. That matters because developers can build loops, branches, retries, parallel paths, and multi-agent handoffs without fighting a rigid framework. For teams building complex workflows, this is the difference between a demo that works once and a system they can reason about six months later.

  • Durable execution and checkpointing: LangGraph can save a checkpoint at every step, so if a worker crashes or a node fails, the run resumes from the last saved state instead of starting over. This is especially valuable for long-running jobs that may span hours or days. It also gives teams a practical way to replay and debug failures without losing all prior work.
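The mechanics of resuming from a checkpoint can be sketched in a few lines of plain Python (again, not the LangGraph API): the runner saves state after every step, so a second attempt skips the work that already succeeded.

```python
# Toy sketch of step-level checkpointing, NOT the LangGraph API.
import json
import os
import tempfile

def run_with_checkpoints(steps, state, path):
    start = 0
    if os.path.exists(path):                  # resume from the last save
        with open(path) as f:
            saved = json.load(f)
        start, state = saved["step"], saved["state"]
    for i in range(start, len(steps)):
        state = steps[i](state)
        with open(path, "w") as f:            # checkpoint after every step
            json.dump({"step": i + 1, "state": state}, f)
    return state

runs = {"a": 0}
crash_once = {"armed": True}

def step_a(state):
    runs["a"] += 1
    return state + ["a"]

def step_b(state):
    if crash_once["armed"]:                   # simulate a worker crash
        crash_once["armed"] = False
        raise RuntimeError("worker crashed")
    return state + ["b"]

path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
steps = [step_a, step_b]

try:
    run_with_checkpoints(steps, [], path)     # first attempt dies at step_b
except RuntimeError:
    pass

result = run_with_checkpoints(steps, [], path)  # resumes after step_a
```

After the retry, `result` is `["a", "b"]` and `step_a` has run exactly once, which is the whole point: completed work is not repeated.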

  • Human-in-the-loop interrupts: The platform can pause execution before sensitive actions, wait for approval, and then resume with full context intact. Teams can approve, edit, or reject an action before it runs. For finance, healthcare, or operations workflows, this creates a middle ground between full autonomy and full manual review.
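The pause-review-resume shape can be illustrated with a toy interrupt in plain Python (a sketch of the pattern, not the LangGraph API): the run raises before a sensitive action, a reviewer sees and edits the proposed action, and execution resumes with state intact.

```python
# Toy sketch of a human-in-the-loop interrupt, NOT the LangGraph API.
class Interrupt(Exception):
    """Carries the pending action out to a human reviewer."""
    def __init__(self, pending):
        self.pending = pending

def transfer_funds(state, approved_amount=None):
    if approved_amount is None:
        # Pause here and surface the proposed action for review.
        raise Interrupt({"action": "transfer", "amount": state["amount"]})
    # Resume with the reviewer's (possibly edited) decision.
    return {**state, "transferred": approved_amount}

state = {"amount": 5000}
try:
    transfer_funds(state)
except Interrupt as i:
    pending = i.pending   # reviewer sees {"action": "transfer", "amount": 5000}

# The reviewer edits the amount down, then the run resumes in context.
state = transfer_funds(state, approved_amount=2500)
```

In the real platform the paused state is checkpointed, so the approval can arrive minutes or days later and the run still resumes where it stopped.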

  • Real-time streaming: LangGraph supports token streaming, intermediate node updates, and custom event payloads while an agent is running. Multiple consumers can listen to the same run at once, including a user interface, monitoring dashboard, and logging system. That visibility matters because production agents are much harder to trust when they disappear for 30 seconds and return with a final answer and no trail.
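A generator-based toy (not the LangGraph API) shows why streaming helps: the run emits node updates and tokens as it goes, and a UI and a monitoring trace can both consume the same event stream instead of waiting for one final answer.

```python
# Toy sketch of streaming intermediate events, NOT the LangGraph API.
def run_agent(question):
    # Node-level progress events, then token-level output events.
    yield {"type": "node", "name": "retrieve"}
    yield {"type": "node", "name": "answer"}
    for token in ["The", " answer", " is", " 42."]:
        yield {"type": "token", "text": token}

ui_text = []
trace = []
for event in run_agent("meaning of life?"):
    if event["type"] == "token":
        ui_text.append(event["text"])   # the UI streams tokens to the user
    trace.append(event)                 # monitoring records every event

answer = "".join(ui_text)
```

Here `ui_text` reconstructs the visible answer while `trace` keeps the full trail, including which nodes ran before any tokens appeared.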

  • Short-term and long-term memory: Short-term memory lives in thread-scoped state and checkpoints, while long-term memory can be stored across sessions using a store abstraction. This gives teams a clean way to separate “what happened in this run” from “what this agent should remember about the user or system over time.” It is a more explicit memory model than many agent frameworks offer.
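The separation can be sketched in plain Python (a conceptual model, not the LangGraph store API): thread-scoped state starts fresh per run, while a user-keyed store persists facts across threads.

```python
# Toy sketch of short-term vs long-term memory, NOT the LangGraph API.
long_term = {}   # survives across sessions, keyed by user

def remember(user, key, value):
    """Promote a fact into cross-session memory."""
    long_term.setdefault(user, {})[key] = value

def new_thread(user):
    # Short-term state is empty per thread but can read long-term memory.
    return {"user": user, "messages": [], "profile": long_term.get(user, {})}

thread1 = new_thread("ada")
thread1["messages"].append("I prefer metric units")
remember("ada", "units", "metric")   # worth keeping beyond this thread

thread2 = new_thread("ada")          # fresh thread: no messages carried over,
                                     # but the promoted preference is visible
```

The design question this makes visible is *what* gets promoted: conversation turns stay thread-local, while durable user facts are written to the store explicitly.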

  • Two developer APIs: Teams can build with the declarative Graph API or the more procedural Functional API. Both sit on the same runtime and support the same core capabilities. In practice, this helps teams adopt LangGraph incrementally, especially if they want durable execution and tracing without rewriting every workflow as a graph on day one.

  • Multi-agent patterns: LangGraph supports subagents, handoffs, routers, and custom hierarchical systems. That flexibility is why companies like LinkedIn and Uber used it for more involved agent architectures rather than simple chatbot flows. It does not force one multi-agent pattern, which is useful if your design changes as the product matures.
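One of those patterns, a router with handoffs, can be sketched in plain Python (a toy of the pattern, not the LangGraph API): a router delegates the request, and an agent that cannot finish the job hands off to another agent rather than guessing.

```python
# Toy sketch of router + handoff multi-agent coordination, NOT LangGraph.
def billing_agent(state):
    if "refund" in state["request"]:
        # Handoff: billing cannot authorize refunds on its own.
        return {**state, "next": "support_agent"}
    return {**state, "answer": "invoice sent", "next": None}

def support_agent(state):
    return {**state, "answer": "refund approved by support", "next": None}

agents = {"billing_agent": billing_agent, "support_agent": support_agent}

def router(request):
    first = "billing_agent" if "invoice" in request else "support_agent"
    state = {"request": request, "next": first}
    while state["next"]:                 # follow handoffs until done
        state = agents[state["next"]](state)
    return state["answer"]
```

Swapping this toy router for a hierarchy of subgraphs, or adding more agents, does not change the shape: shared state plus an explicit "who runs next" decision.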

  • Deployment flexibility: Teams can use the open-source library for free, deploy with Docker, use fully managed cloud deployment, choose BYOC on AWS, or run self-hosted enterprise infrastructure. This matters for companies with security or data residency requirements, and for startups trying to keep software costs low while still getting into production quickly.

  • Production observability through LangSmith: LangSmith adds tracing, evaluations, topic tagging, trajectory analysis, and quality monitoring on top of LangGraph runs. Agent systems are hard to debug with ordinary logs because the failure is often not a crash but a bad decision. LangSmith is built around that reality.

  • Performance advantages in orchestration: In one published benchmark running the same five-agent workflow 100 times, LangGraph completed more than 2x faster than open-source CrewAI. The difference came largely from lower orchestration overhead and more efficient state passing. For tool-heavy workflows, that can materially affect both latency and token cost.

Use Cases

One of the clearest stories in our research came from AppFolio. The company built Realm-X, a copilot for property managers, and after moving to LangGraph reported 2x higher response accuracy and savings of more than 10 hours per week. That is a good example of where LangGraph fits well: not just answering questions, but orchestrating domain-specific workflows where context, reliability, and state matter across multiple steps.

LinkedIn used LangGraph to build an AI recruiter with a hierarchical agent system. That is not a simple "chat with your docs" use case. Recruiting involves multiple subproblems (sourcing, screening, coordination) and likely different decision paths depending on role and candidate data. LangGraph's graph model and support for hierarchical orchestration matched that kind of workload better than a linear chain.

Uber’s developer platform team used LangGraph to build a network of agents for large-scale code migrations and automated unit test generation. This is the sort of engineering workflow where failure recovery matters because the process may touch many files, run many tools, and take long enough that crashes or retries are not edge cases. The checkpointing model is a practical fit here.

Other documented deployments show similar variety. Elastic built threat detection agents. BlackRock used it for finance-focused copilots. City of Hope and Vizient used it in healthcare contexts. C.H. Robinson applied it in logistics. The pattern across these examples is not one industry or one model provider. It is teams with real workflows, domain logic, and a need for visibility into what the agent is doing at each step.

The broader survey data supports that picture. In LangChain's 2025 State of Agent Engineering survey, 57.3% of respondents said they already had agents in production, and another 30.4% were actively building with plans to deploy. Among production use cases, customer service led at 26.5%, followed by research and data analysis at 24.4%. Large enterprises leaned heavily toward internal productivity use cases, which lines up with the kinds of LangGraph deployments we saw: internal QA, knowledge search, text-to-SQL, planning, and workflow automation.

Strengths and Weaknesses

Strengths:

  • LangGraph gives teams unusually fine control over agent behavior. That sounds abstract until you look at who adopted it. LinkedIn and Uber did not choose it because they wanted a prettier chatbot wrapper. They chose it because they needed custom orchestration patterns, hierarchical agents, and workflows that did not fit into a canned template.

  • Its durability story is stronger than what we usually see in agent tooling. Checkpointing after steps means a failed worker or temporary outage does not wipe out a long run. For teams running multi-step processes that may last hours, this is a real operational advantage over lighter frameworks that are easier to start with but weaker when things break.

  • The production tooling is more mature than many alternatives. LangSmith tracing, evaluation, and deployment infrastructure are built around the actual problems agent teams face, not just uptime metrics. That becomes important when the failure is “the agent took a weird path” rather than “the API returned 500.”

  • Performance appears meaningfully better than some peers in orchestration-heavy workflows. In the benchmark cited in our research, LangGraph ran the same five-agent workflow more than twice as fast as open-source CrewAI. It also used state passing more efficiently than AutoGen in scenarios with long histories.

  • It has real enterprise proof points across industries. AppFolio, LinkedIn, Uber, Elastic, BlackRock, City of Hope, Vizient, and C.H. Robinson are not all solving the same problem, but they all found the framework adaptable enough for production use.

Weaknesses:

  • The learning curve is real. LangGraph’s “minimal abstraction” philosophy gives flexibility, but it also asks developers to think in terms of state schemas, reducers, nodes, edges, and execution flow. Teams used to higher-level frameworks like CrewAI may get an initial prototype running faster elsewhere, even if they hit limits later.

  • The product naming and packaging can be confusing. There is the open-source LangGraph library, the platform formerly called LangGraph Platform or LangGraph Cloud, and the deployment layer now marketed as LangSmith Deployment. For buyers comparing tools, this can blur where the free framework ends and the paid platform begins.

  • Rapid evolution is a tradeoff. Some teams like that LangGraph is learning quickly from production deployments. Others want slower release cycles and stronger backward compatibility. Compared with a tool like ZenML, which tends to move more conservatively, LangGraph can feel less settled.

  • Checkpointing is helpful, but not free. In very high-throughput systems or workflows with many super-steps and parallel branches, checkpoint writes can add overhead and put pressure on storage backends. Most teams will not hit that wall early, but architecture choices around persistence become more important as workloads scale.

  • Security history deserves attention. In early 2026, researchers disclosed three CVEs affecting LangChain and LangGraph components, including path traversal, deserialization, and SQL injection issues. Patches were released quickly, but the episode is a reminder that agent infrastructure needs the same patch discipline and security review as any other production system.

Pricing

  • Open-source LangGraph library: Free
  • Self-hosted Docker deployment: Free software, infrastructure costs only
  • LangSmith Developer: Free, includes 5,000 base traces/month, 1 fleet agent, up to 50 fleet runs/month
  • LangSmith Plus: $39/seat/month, includes 10,000 base traces/month, 1 dev-sized deployment, unlimited fleet agents, up to 500 fleet runs/month
  • Deployment runtime on Plus: $0.0007/minute for development deployments, $0.0036/minute for production deployments
  • Enterprise: Custom pricing

The pricing story here is split in two. The core LangGraph framework is MIT licensed and free. Teams can use it commercially, self-host it, and deploy with Docker without paying LangChain for the software itself. That makes LangGraph unusually accessible for engineering-heavy teams that are comfortable running their own infrastructure.

The paid part begins when teams want LangSmith Deployment and LangSmith observability. The free Developer plan is enough for solo experimentation, but production teams will usually end up on Plus or Enterprise. The per-minute deployment pricing is worth watching. A continuously running production deployment on Plus can add up to roughly $259 to $310 per month depending on usage assumptions, before model costs.
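As a back-of-envelope check on the runtime component alone (an assumption for illustration: a single production deployment running continuously for a 30-day month at the listed Plus rate), the per-minute charge works out as follows; real bills add seats, traces, and any extra deployments, which is how totals in the range quoted above arise.

```python
# Runtime cost of one always-on production deployment at the listed
# Plus rate of $0.0036/minute. Seats, traces, and additional
# deployments are billed on top of this.
minutes_per_month = 60 * 24 * 30              # 43,200 minutes
runtime_cost = minutes_per_month * 0.0036     # dollars per month
```

That lands at about $155.52/month for the runtime line item by itself, before per-seat and usage charges.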

The biggest gotcha is that model usage is billed separately by OpenAI, Anthropic, or whichever provider you use. LangSmith is not your token bill. For many teams, LLM spend will exceed platform spend, especially with frequent tool calls or larger models. So the real comparison is not “LangGraph is free” versus “LangSmith is paid,” but whether your team wants to trade infrastructure work for managed deployment, tracing, and evaluation.

Alternatives

CrewAI

CrewAI is often the first alternative teams compare because it offers a more opinionated, role-based model for multi-agent systems. If your team likes the idea of clearly defined agents with delegated tasks and stronger defaults, CrewAI can feel easier to grasp. The tradeoff is flexibility. Our research found that LangGraph tends to win when the workflow stops looking like a neat team of roles and starts looking like a custom state machine with retries, branches, and human approvals.

AutoGen

Microsoft's AutoGen has been popular for conversational multi-agent experiments, especially where code execution and human interaction are central. It is a strong fit for research and prototyping. LangGraph looks stronger when teams move from "interesting demo" to "reliable production service," especially if they need checkpointing, deterministic replay, and lower orchestration overhead.

LlamaIndex AgentWorkflow

LlamaIndex is a natural option for teams whose agent product is tightly tied to retrieval and document-heavy workflows. Its DAG-style orchestration and RAG orientation can be a better starting point if search and retrieval are the center of the application. LangGraph is the better fit when retrieval is only one piece of a larger orchestration problem.

Semantic Kernel

Semantic Kernel tends to appeal to enterprise teams, especially those already living in Microsoft and .NET environments. Its plugin model and planner concepts fit well in organizations that want AI features inside existing enterprise software patterns. LangGraph is usually the better choice when the team wants more direct control over long-running, stateful agent execution rather than a framework shaped around broader enterprise integration concerns.

ZenML

ZenML is not a direct apples-to-apples competitor, but it comes up for teams thinking about ML workflows, experiment tracking, and LLMOps in one place. If your organization already has mature ML pipeline needs and wants stricter release stability, ZenML may be the safer operational choice. If the core problem is agent orchestration itself, LangGraph is more purpose-built.

OpenAI Swarm

Swarm is a lightweight option for teams experimenting with multi-agent coordination around OpenAI models. It is useful for learning and quick prototypes. LangGraph is what teams usually reach for when they need persistence, deployment infrastructure, provider flexibility, and a path to operating the system under real load.

FAQ

What is LangGraph Platform used for?

It is used to build and run stateful AI agents that may need multiple steps, tool use, memory, human approval, and recovery from failures. Typical examples include internal copilots, research agents, workflow automation, and multi-agent systems.

Is LangGraph Platform the same as LangSmith Deployment?

Mostly on the deployment side, yes. The platform formerly called LangGraph Platform or LangGraph Cloud was rebranded as LangSmith Deployment in October 2025, while the open-source orchestration framework is still known as LangGraph.

Is LangGraph free?

The open-source LangGraph library is free under the MIT license. Managed deployment and observability through LangSmith are paid beyond the free tier.

How do I get started?

Most teams start locally with the Graph API or Functional API, build and test an agent, inspect runs in LangGraph Studio or LangSmith, then deploy through Docker or LangSmith Deployment. If you already have Python or JavaScript workflows, the Functional API can be a gentler entry point.

How long to set up?

A basic local prototype can be running in hours if your workflow is simple. A production setup takes longer because you need to think through persistence, checkpoint storage, model providers, security, and monitoring.

Do I need LangChain to use LangGraph?

No. LangGraph was built by LangChain Inc., but it can work independently with model providers like OpenAI, Anthropic, Gemini, and open-source models.

What makes LangGraph different from other agent frameworks?

The main difference is control. LangGraph gives lower-level primitives for stateful orchestration rather than pushing teams into one predefined agent pattern.

Can LangGraph handle long-running agents?

Yes. This is one of its strongest areas. It supports checkpointing, task queues, and resumable execution for workflows that may run for hours or days.

Does it support human approval steps?

Yes. LangGraph can interrupt execution before certain actions, wait for a person to approve, edit, or reject the action, then resume with the saved context.

Is LangGraph good for multi-agent systems?

Yes, especially if you want custom coordination patterns. It supports routers, handoffs, subgraphs, and hierarchical orchestration rather than only one style of multi-agent design.

What are the main downsides?

The biggest downside is complexity. LangGraph gives a lot of control, but that means more design responsibility, a steeper learning curve, and more moving parts than a higher-level framework.

Is LangGraph production-proven?

Yes. Our research found named deployments and case studies from companies including LinkedIn, Uber, AppFolio, Elastic, BlackRock, City of Hope, Vizient, and C.H. Robinson, plus survey data showing broad production adoption of agents more generally.
